Introduction to Cryptography with Coding Theory
3rd edition
Portfolio Manager: Chelsea Kharakozoua
Content Manager: Jeff Weidenaar
Content Associate: Jonathan Krebs
Content Producer: Tara Corpuz
Managing Producer: Scott Disanno
Producer: Jean Choe
Manager, Courseware QA: Mary Durnwald
Product Marketing Manager: Stacey Sveum
Product and Solution Specialist: Rosemary Morten
Senior Author Support/Technology Specialist: Joe Vetere
Manager, Rights and Permissions: Gina Cheselka
Text and Cover Design, Production Coordination, Composition, and Illustrations: Integra Software Services Pvt. Ltd
Manufacturing Buyer: Carol Melville, LSC Communications
Cover Image: Photographer is my life/Getty Images
Copyright © 2020, 2006, 2002 by Pearson Education, Inc. 221 River Street, Hoboken, NJ 07030. All Rights Reserved. Printed in the United States of America. This publication is protected by copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsoned.com/
Text Credit: Page 23 Declaration of Independence: A Transcription, The U.S. National Archives and Records Administration.
PEARSON, ALWAYS LEARNING, and MYLAB are exclusive trademarks owned by Pearson Education, Inc. or its affiliates in the U.S. and/or other countries.
Unless otherwise indicated herein, any third-party trademarks that may appear in this work are the property of their respective owners and any references to third-party trademarks, logos or other trade dress are for demonstrative or descriptive purposes only. Such references are not intended to imply any sponsorship, endorsement, authorization, or promotion of Pearson’s products by the owners of such marks, or any relationship between the owner and Pearson Education, Inc. or its affiliates, authors, licensees or distributors.
Library of Congress Cataloging-in-Publication Data
Names: Trappe, Wade, author. | Washington, Lawrence C., author.
Title: Introduction to cryptography : with coding theory / Wade Trappe, Lawrence Washington.
Description: 3rd edition. | [Hoboken, New Jersey] : [Pearson Education], [2020] | Includes bibliographical references and index. | Summary: “This book is based on a course in cryptography at the upper-level undergraduate and beginning graduate level that has been given at the University of Maryland since 1997, and a course that has been taught at Rutgers University since 2003”— Provided by publisher.
Identifiers: LCCN 2019029691 | ISBN 9780134860992 (paperback)
Subjects: LCSH: Coding theory. | Cryptography.
Classification: LCC QA268.T73 2020 | DDC 005.8/24—dc23
LC record available at https:/
ISBN-13: 978-0-13-485906-4
ISBN-10: 0-13-485906-5
This book is based on a course in cryptography at the upper-level undergraduate and beginning graduate level that has been given at the University of Maryland since 1997, and a course that has been taught at Rutgers University since 2003. When designing the courses, we decided on the following requirements:
The courses should be up-to-date and cover a broad selection of topics from a mathematical point of view.
The material should be accessible to mathematically mature students having little background in number theory and computer programming.
There should be examples involving numbers large enough to demonstrate how the algorithms really work.
We wanted to avoid concentrating solely on RSA and discrete logarithms, which would have made the courses mostly about number theory. We also did not want to focus on protocols and how to hack into friends’ computers. That would have made the courses less mathematical than desired.
There are numerous topics in cryptology that can be discussed in an introductory course. We have tried to include many of them. The chapters represent, for the most part, topics that were covered during the different semesters we taught the course. There is certainly more material here than could be treated in most one-semester courses. The first thirteen chapters represent the core of the material. The choice of which of the remaining chapters are used depends on the level of the students and the objectives of the lecturer.
The chapters are numbered, thus giving them an ordering. However, except for Chapter 3 on number theory, which pervades the subject, the chapters are fairly independent of each other and can be covered in almost any reasonable order. Since students have varied backgrounds in number theory, we have collected the basic number theory facts together in Chapter 3 for ease of reference; however, we recommend introducing these concepts gradually throughout the course as they are needed.
The chapters on information theory, elliptic curves, quantum cryptography, lattice methods, and error correcting codes are somewhat more mathematical than the others. The chapter on error correcting codes was included, at the suggestion of several reviewers, because courses that include introductions to both cryptology and coding theory are fairly common.
Suppose you want to give an example for RSA. You could choose two one-digit primes and pretend to be working with fifty-digit primes, or you could use your favorite software package to do an actual example with large primes. Or perhaps you are working with shift ciphers and are trying to decrypt a message by trying all 26 shifts of the ciphertext. This should also be done on a computer.
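For instance, the exhaustive shift-cipher search can be sketched in a few lines of Python (an illustrative sketch; the book's computer examples use Mathematica, Maple, MATLAB, and Sage):

```python
# Brute-force decryption of a shift (Caesar) cipher: try all 26 shifts
# and let a human pick out the meaningful plaintext.

def shift_decrypt(ciphertext, shift):
    """Shift each letter of an uppercase A-Z string back by `shift` positions."""
    return "".join(
        chr((ord(c) - ord("A") - shift) % 26 + ord("A")) for c in ciphertext
    )

def all_shifts(ciphertext):
    """Return all 26 candidate decryptions of the ciphertext."""
    return [shift_decrypt(ciphertext, s) for s in range(26)]

# "ATTACK" encrypted with a shift of 3 gives "DWWDFN"; shift 3 recovers it.
for shift, candidate in enumerate(all_shifts("DWWDFN")):
    print(shift, candidate)
```

Only one of the 26 candidates will read as English, which is exactly why the shift cipher falls to this attack.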
Additionally, at the end of the book are appendices containing computer examples written in each of Mathematica®, Maple®, MATLAB®, and Sage that show how to do such calculations. These languages were chosen because they are user friendly and do not require prior programming experience. Although the course has been taught successfully without computers, these examples are an integral part of the book and should be studied, if at all possible. Not only do they contain numerical examples of how to do certain computations, but they also demonstrate important ideas and issues that arise. They were placed at the end of the book because of the logistical and aesthetic problems of including extensive computer examples in these languages at the ends of chapters.
Additionally, programs available in Mathematica, Maple, and MATLAB can be downloaded from the Web site (bit.ly/2JbcS6p). Homework problems (the computer problems in various chapters) based on the software allow students to play with examples individually. Of course, students having more programming background could write their own programs instead. In a classroom, all that is needed is a computer (with one of the languages installed) and a projector in order to produce meaningful examples as the lecture is being given.
Two major changes have informed this edition: changes to the field of cryptography and a change in the format of the text. We address these issues separately, although there is an interplay between the two:
Cryptography is a quickly changing field. We have made many changes to the text since the last edition:
Reorganized content previously in two chapters into four separate chapters on Stream Ciphers (including RC4), Block Ciphers, DES, and AES (Chapters 5–8, respectively). The RC4 material, in particular, is new.
Heavily revised the chapters on hash functions. Chapter 11 (Hash functions) now includes sections on SHA-2 and SHA-3. Chapter 12 (Hash functions: Attacks and Applications) now includes material on message authentication codes, password protocols, and blockchains.
The short section on the one-time pad has been expanded to become Chapter 4, which includes sections on multiple use of the one-time pad, perfect secrecy, and ciphertext indistinguishability.
Added Chapter 14, “What Can Go Wrong,” which shows what can happen when cryptographic algorithms are used or designed incorrectly.
Expanded Chapter 16 on digital cash to include Bitcoin and cryptocurrencies.
Added Chapter 22, which gives an introduction to Pairing-Based Cryptography.
Updated the exposition throughout the book to reflect recent developments.
Added references to the Maple, Mathematica, MATLAB, and Sage appendices in relevant locations in the text.
Added many new exercises.
Added a section at the back of the book that contains answers or hints to a majority of the odd-numbered problems.
A focus of this revision was transforming the text from a print-based learning tool to a digital learning tool. The eText is therefore filled with content and tools that will help bring the content of the course to life for students in new ways and help improve instruction. Specifically, the following are features that are available only in the eText:
Interactive Examples. We have added a number of opportunities for students to interact with content in a dynamic manner in order to build or enhance understanding. Interactive examples allow students to explore concepts in ways that are not possible without technology.
Quick Questions. These questions, built into the narrative, provide opportunities for students to check and clarify understanding. Some help address potential misconceptions.
Notes, Labels, and Highlights. Notes can be added to the eText by instructors. These notes are visible to all students in the course, allowing instructors to add their personal observations or directions to important topics, call out need-to-know information, or clarify difficult concepts. Students can add their own notes, labels, and highlights to the eText, helping them focus on what they need to study. The customizable Notebook allows students to filter, arrange, and group their notes in a way that makes sense to them.
Dashboard. Instructors can create reading assignments and see the time spent in the eText so that they can plan more effective instruction.
Portability. Portable access lets students read their eText whenever they have a moment in their day, on Android and iOS mobile phones and tablets. Even without an Internet connection, offline reading ensures students never miss a chance to learn.
Ease-of-Use. Straightforward setup makes it easy for instructors to get their class up and reading quickly on the first day of class. In addition, Learning Management System (LMS) integration provides institutions, instructors, and students with single sign-on access to the eText via many popular LMSs.
Supplements. An Instructors’ Solutions Manual can be downloaded by qualified instructors from the textbook’s webpage at www.pearson.com.
Many people helped and provided encouragement during the preparation of this book. First, we would like to thank our students, whose enthusiasm, insights, and suggestions contributed greatly. We are especially grateful to many people who have provided corrections and other input, especially Bill Gasarch, Jeff Adams, Jonathan Rosenberg, and Tim Strobell. We would like to thank Wenyuan Xu, Qing Li, and Pandurang Kamat, who drew several of the diagrams and provided feedback on the new material for the second edition. We have enjoyed working with the staff at Pearson, especially Jeff Weidenaar and Tara Corpuz.
The reviewers deserve special thanks: their suggestions on the exposition and the organization of the topics greatly enhanced the final result. The reviewers marked with an asterisk (*) provided input for this edition.
* Anurag Agarwal, Rochester Institute of Technology
* Pradeep Atrey, University at Albany
Eric Bach, University of Wisconsin
James W. Brewer, Florida Atlantic University
Thomas P. Cahill, NYU
Agnes Chan, Northeastern University
* Nathan Chenette, Rose-Hulman Institute of Technology
* Claude Crépeau, McGill University
* Reza Curtmola, New Jersey Institute of Technology
* Ahmed Desoky, University of Louisville
Anthony Ephremides, University of Maryland, College Park
* David J. Fawcett, Lawrence Tech University
* Jason Gibson, Eastern Kentucky University
* K. Gopalakrishnan, East Carolina University
David Grant, University of Colorado, Boulder
Jugal K. Kalita, University of Colorado, Colorado Springs
* Saroja Kanchi, Kettering University
* Andrew Klapper, University of Kentucky
* Amanda Knecht, Villanova University
Edmund Lamagna, University of Rhode Island
* Aihua Li, Montclair State University
* Spyros S. Magliveras, Florida Atlantic University
* Nathan McNew, Towson University
* Nick Novotny, IUPUI
David M. Pozar, University of Massachusetts, Amherst
* Emma Previato, Boston University
* Hamzeh Roumani, York University
* Bonnie Saunders, University of Illinois, Chicago
* Ravi Shankar, University of Oklahoma
* Ernie Stitzinger, North Carolina State
* Armin Straub, University of South Alabama
J. Felipe Voloch, University of Texas, Austin
Daniel F. Warren, Naval Postgraduate School
* Simon Whitehouse, Alfred State College
Siman Wong, University of Massachusetts, Amherst
* Huapeng Wu, University of Windsor
Wade thanks Nisha Gilra, who provided encouragement and advice; Sheilagh O’Hare for introducing him to the field of cryptography; and K. J. Ray Liu for his support. Larry thanks Susan Zengerle and Patrick Washington for their patience, help, and encouragement during the writing of this book.
Of course, we welcome suggestions and corrections. An errata page can be found at (bit.ly/2J8nN0w) or at the link on the book’s general Web site (bit.ly/2T544yu).
People have always had a fascination with keeping information away from others. As children, many of us had magic decoder rings for exchanging coded messages with our friends and possibly keeping secrets from parents, siblings, or teachers. History is filled with examples where people tried to keep information secret from adversaries. Kings and generals communicated with their troops using basic cryptographic methods to prevent the enemy from learning sensitive military information. In fact, Julius Caesar reportedly used a simple cipher, which has been named after him.
As society has evolved, the need for more sophisticated methods of protecting data has increased. Now, with the information era at hand, the need is more pronounced than ever. As the world becomes more connected, the demand for information and electronic services is growing, and with the increased demand comes increased dependency on electronic systems. Already the exchange of sensitive information, such as credit card numbers, over the Internet is common practice. Protecting data and electronic systems is crucial to our way of living.
The techniques needed to protect data belong to the field of cryptography. Actually, the subject has three names, cryptography, cryptology, and cryptanalysis, which are often used interchangeably. Technically, however, cryptology is the all-inclusive term for the study of communication over nonsecure channels, and related problems. The process of designing systems to do this is called cryptography. Cryptanalysis deals with breaking such systems. Of course, it is essentially impossible to do either cryptography or cryptanalysis without having a good understanding of the methods of both areas.
Often the term coding theory is used to describe cryptography; however, this can lead to confusion. Coding theory deals with representing input information symbols by output symbols called code symbols. There are three basic applications that coding theory covers: compression, secrecy, and error correction. Over the past few decades, the term coding theory has become associated predominantly with error correcting codes. Coding theory thus studies communication over noisy channels and how to ensure that the message received is the correct message, as opposed to cryptography, which protects communication over nonsecure channels.
Although error correcting codes are only a secondary focus of this book, we should emphasize that, in any real-world system, error correcting codes are used in conjunction with encryption, since the change of a single bit is enough to destroy the message completely in a well-designed cryptosystem.
Modern cryptography is a field that draws heavily upon mathematics, computer science, and cleverness. This book provides an introduction to the mathematics and protocols needed to make data transmission and electronic systems secure, along with techniques such as electronic signatures and secret sharing.
In the basic communication scenario, depicted in Figure 1.1, there are two parties, we’ll call them Alice and Bob, who want to communicate with each other. A third party, Eve, is a potential eavesdropper.
When Alice wants to send a message, called the plaintext, to Bob, she encrypts it using a method prearranged with Bob. Usually, the encryption method is assumed to be known to Eve; what keeps the message secret is a key. When Bob receives the encrypted message, called the ciphertext, he changes it back to the plaintext using a decryption key.
Eve could have one of the following goals:
1. Read the message.
2. Find the key and thus read all messages encrypted with that key.
3. Corrupt Alice’s message into another message in such a way that Bob will think Alice sent the altered message.
4. Masquerade as Alice, and thus communicate with Bob even though Bob believes he is communicating with Alice.
Which case we’re in depends on how evil Eve is. Cases (3) and (4) relate to issues of integrity and authentication, respectively. We’ll discuss these shortly. A more active and malicious adversary, corresponding to cases (3) and (4), is sometimes called Mallory in the literature. More passive observers (as in cases (1) and (2)) are sometimes named Oscar. We’ll generally use only Eve, and assume she is as bad as the situation allows.
There are four main types of attack that Eve might be able to use. The differences among these types of attacks are the amounts of information Eve has available to her when trying to determine the key. The four attacks are as follows:
Ciphertext only: Eve has only a copy of the ciphertext.
Known plaintext: Eve has a copy of a ciphertext and the corresponding plaintext. For example, suppose Eve intercepts an encrypted press release, then sees the decrypted release the next day. If she can deduce the decryption key, and if Alice doesn’t change the key, Eve can read all future messages. Or, if Alice always starts her messages with “Dear Bob,” then Eve has a small piece of ciphertext and corresponding plaintext. For many weak cryptosystems, this suffices to find the key. Even for stronger systems such as the German Enigma machine used in World War II, this amount of information has been useful.
Chosen plaintext: Eve gains temporary access to the encryption machine. She cannot open it to find the key; however, she can encrypt a large number of suitably chosen plaintexts and try to use the resulting ciphertexts to deduce the key.
Chosen ciphertext: Eve obtains temporary access to the decryption machine, uses it to “decrypt” several strings of symbols, and tries to use the results to deduce the key.
A chosen plaintext attack could happen as follows. You want to identify an airplane as friend or foe. Send a random message to the plane, which encrypts the message automatically and sends it back. Only a friendly airplane is assumed to have the correct key. Compare the message from the plane with the correctly encrypted message. If they match, the plane is friendly. If not, it’s the enemy. However, the enemy can send a large number of chosen messages to one of your planes and look at the resulting ciphertexts. If this allows them to deduce the key, the enemy can equip their planes so they can masquerade as friendly.
An example of a known plaintext attack reportedly happened in World War II in the Sahara Desert. An isolated German outpost every day sent an identical message saying that there was nothing new to report, but of course it was encrypted with the key being used that day. So each day the Allies had a plaintext-ciphertext pair that was extremely useful in determining the key. In fact, during the Sahara campaign, General Montgomery was carefully directed around the outpost so that the transmissions would not be stopped.
One of the most important assumptions in modern cryptography is Kerckhoffs’s principle: In assessing the security of a cryptosystem, one should always assume the enemy knows the method being used. This principle was enunciated by Auguste Kerckhoffs in 1883 in his classic treatise La Cryptographie Militaire. The enemy can obtain this information in many ways. For example, encryption/decryption machines can be captured and analyzed. Or people can defect or be captured. The security of the system should therefore be based on the key and not on the obscurity of the algorithm used. Consequently, we always assume that Eve has knowledge of the algorithm that is used to perform encryption.
Encryption/decryption methods fall into two categories: symmetric key and public key. In symmetric key algorithms, the encryption and decryption keys are known to both Alice and Bob. For example, the encryption key is shared and the decryption key is easily calculated from it. In many cases, the encryption key and the decryption key are the same. All of the classical (pre-1970) cryptosystems are symmetric, as are the more recent Data Encryption Standard (DES) and Advanced Encryption Standard (AES).
Public key algorithms were introduced in the 1970s and revolutionized cryptography. Suppose Alice wants to communicate securely with Bob, but they are hundreds of kilometers apart and have not agreed on a key to use. It seems almost impossible for them to do this without first getting together to agree on a key, or using a trusted courier to carry the key from one to the other. Certainly Alice cannot send a message over open channels to tell Bob the key, and then send the ciphertext encrypted with this key. The amazing fact is that this problem has a solution, called public key cryptography. The encryption key is made public, but it is computationally infeasible to find the decryption key without information known only to Bob. The most popular implementation is RSA (see Chapter 9), which is based on the difficulty of factoring large integers. Other versions (see Chapters 10, 23, and 24) are the ElGamal system (based on the discrete log problem), NTRU (lattice based) and the McEliece system (based on error correcting codes).
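A toy version of RSA makes the asymmetry concrete. The Python sketch below uses deliberately tiny, illustrative primes; real keys use primes of hundreds of digits, and Chapter 9 develops RSA properly:

```python
# Toy RSA with tiny primes, purely for illustration.
# Real systems use primes of hundreds of digits; these values are hypothetical.

p, q = 61, 53                 # Bob's secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public encryption exponent; gcd(e, phi) = 1
d = pow(e, -1, phi)           # secret decryption exponent (Python 3.8+)

m = 65                        # Alice's message, encoded as an integer < n
c = pow(m, e, n)              # Alice encrypts with Bob's public key (n, e)
print(c, pow(c, d, n))        # ciphertext, then the recovered plaintext 65
```

Anyone can compute c from the public pair (n, e), but recovering d from (n, e) requires factoring n, which is easy here and believed infeasible at realistic sizes.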
Here is a nonmathematical way to do public key communication. Bob sends Alice a box and an unlocked padlock. Alice puts her message in the box, locks Bob’s lock on it, and sends the box back to Bob. Of course, only Bob can open the box and read the message. The public key methods mentioned previously are mathematical realizations of this idea. Clearly there are questions of authentication that must be dealt with. For example, Eve could intercept the first transmission and substitute her own lock. If she then intercepts the locked box when Alice sends it back to Bob, Eve can unlock her lock and read Alice’s message. This is a general problem that must be addressed with any such system.
Public key cryptography represents what is possibly the final step in an interesting historical progression. In the earliest years of cryptography, security depended on keeping the encryption method secret. Later, the method was assumed known, and the security depended on keeping the (symmetric) key private or unknown to adversaries. In public key cryptography, the method and the encryption key are made public, and everyone knows what must be done to find the decryption key. The security rests on the fact (or hope) that this is computationally infeasible. It’s rather paradoxical that an increase in the power of cryptographic algorithms over the years has corresponded to an increase in the amount of information given to an adversary about such algorithms.
Public key methods are very powerful, and it might seem that they make the use of symmetric key cryptography obsolete. However, this added flexibility is not free and comes at a computational cost. The amount of computation needed in public key algorithms is typically several orders of magnitude more than the amount of computation needed in algorithms such as DES or AES/Rijndael. The rule of thumb is that public key methods should not be used for encrypting large quantities of data. For this reason, public key methods are used in applications where only small amounts of data must be processed (for example, digital signatures and sending keys to be used in symmetric key algorithms).
Within symmetric key cryptography, there are two types of ciphers: stream ciphers and block ciphers. In stream ciphers, the data are fed into the algorithm in small pieces (bits or characters), and the output is produced in corresponding small pieces. We discuss stream ciphers in Chapter 5. In block ciphers, however, a block of input bits is collected and fed into the algorithm all at once, and the output is a block of bits. Mostly we shall be concerned with block ciphers. In particular, we cover two very significant examples. The first is DES, and the second is AES, which was selected in the year 2000 by the National Institute of Standards and Technology as the replacement for DES. Public key methods such as RSA can also be regarded as block ciphers.
Finally, we mention a historical distinction between different types of encryption, namely codes and ciphers. In a code, words or certain letter combinations are replaced by codewords (which may be strings of symbols). For example, the British navy in World War I used 03680C, 36276C, and 50302C to represent shipped at, shipped by, and shipped from, respectively. Codes have the disadvantage that unanticipated words cannot be used. A cipher, on the other hand, does not use the linguistic structure of the message but rather encrypts every string of characters, meaningful or not, by some algorithm. A cipher is therefore more versatile than a code. In the early days of cryptography, codes were commonly used, sometimes in conjunction with ciphers. They are still used today; covert operations are often given code names. However, any secret that is to remain secure needs to be encrypted with a cipher. In this book, we’ll deal exclusively with ciphers.
The security of cryptographic algorithms is a difficult property to measure. Most algorithms employ keys, and the security of the algorithm is related to how difficult it is for an adversary to determine the key. The most obvious approach is to try every possible key and see which ones yield meaningful decryptions. Such an attack is called a brute force attack. In a brute force attack, the length of the key is directly related to how long it will take to search the entire keyspace. For example, if a key is n bits long, then there are 2^n possible keys. The DES algorithm has a 56-bit key and thus has 2^56 ≈ 7.2 × 10^16 possible keys.
In many situations we’ll encounter in this book, it will seem that a system can be broken by simply trying all possible keys. However, this is often easier said than done. Suppose you need to try 10^30 possibilities and you have a computer that can do 10^9 such calculations each second. There are around 3 × 10^7 seconds in a year, so it would take a little more than 3 × 10^13 years to complete the task, longer than the predicted life of the universe.
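Estimates of this kind are easy to reproduce. The following sketch (assuming a hypothetical machine that tests 10^9 keys per second) turns a keyspace size into years of search:

```python
# Back-of-the-envelope brute-force timing, assuming a hypothetical machine
# that tests 10**9 keys per second.

SECONDS_PER_YEAR = 3 * 10**7  # roughly 365 * 24 * 3600

def years_to_search(keyspace, keys_per_second=10**9):
    """Years needed to try every key in the keyspace on one such machine."""
    return keyspace / keys_per_second / SECONDS_PER_YEAR

print(f"{years_to_search(10**30):.1e} years")  # about 3.3e+13 years
print(f"{years_to_search(2**56):.1f} years")   # the DES keyspace: a few years
```

The same arithmetic shows why key length matters so much: each extra bit doubles the search time.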
Longer keys are advantageous but are not guaranteed to make an adversary’s task difficult. The algorithm itself also plays a critical role. Some algorithms might be able to be attacked by means other than brute force, and some algorithms just don’t make very efficient use of their keys’ bits. This is a very important point to keep in mind. Not all algorithms are created equal!
For example, one of the easiest cryptosystems to break is the substitution cipher, which we discuss in Section 2.4. The number of possible keys is 26! ≈ 4 × 10^26. In contrast, DES (see Chapter 7) has only 2^56 ≈ 7.2 × 10^16 keys. But it typically takes over a day on a specially designed computer to find a DES key. The difference is that an attack on a substitution cipher uses the underlying structure of the language, while the attack on DES is by brute force, trying all possible keys.
A brute force attack should be the last resort. A cryptanalyst always hopes to find an attack that is faster. Examples we’ll meet are frequency analysis (for the substitution and Vigenère ciphers) and birthday attacks (for discrete logs).
We also warn the reader that just because an algorithm seems secure now, we can’t assume that it will remain so. Human ingenuity has led to creative attacks on cryptographic protocols. There are many examples in modern cryptography where an algorithm or protocol was successfully attacked because of a loophole presented by poor implementation, or just because of advances in technology. The DES algorithm, which withstood years of cryptographic scrutiny, ultimately succumbed to attacks by a well-designed parallel computer. Even as you read this book, research in quantum computing is underway, which could dramatically alter the terrain of future cryptographic algorithms.
For example, the security of several systems we’ll study depends on the difficulty of factoring large integers, say of around 600 digits. Suppose you want to factor a number n of this size. The method used in elementary school is to divide n by all of the primes up to the square root of n. There are approximately 1.4 × 10^297 primes less than 10^300. Trying each one is impossible. The number of electrons in the universe is estimated to be less than 10^90. Long before you finish your calculation, you’ll get a call from the electric company asking you to stop. Clearly, more sophisticated factoring algorithms must be used, rather than this brute force type of attack. When RSA was invented, there were some good factoring algorithms available, but it was predicted that a 129-digit number such as the RSA challenge number (see Chapter 9) would not be factored within the foreseeable future. However, advances in algorithms and computer architecture have made such factorizations fairly routine (although they still require substantial computing resources), so now numbers of several hundred digits are recommended for security. But if a full-scale quantum computer is ever built, factorizations of even these numbers will be easy, and the whole RSA scheme (along with many other methods) will need to be reconsidered.
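The count of primes comes from the prime number theorem, which says there are roughly x/ln x primes below x. A few lines of Python (an illustrative sketch working with exponents, not a factoring tool) confirm the order of magnitude:

```python
# The prime number theorem: pi(x), the number of primes below x, is roughly
# x / ln(x). For x = 10**300 we work with the base-10 exponent directly,
# since the numbers themselves are far too large to enumerate.
import math

def log10_prime_count(power_of_ten):
    """Approximate log10 of the number of primes below 10**power_of_ten."""
    k = power_of_ten
    # log10( 10**k / (k * ln 10) ) = k - log10(k * ln 10)
    return k - math.log10(k * math.log(10))

print(round(log10_prime_count(300)))  # 297: about 10**297 primes below 10**300
```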
A natural question, therefore, is whether there are any unbreakable cryptosystems, and, if so, why aren’t they used all the time?
The answer is yes; there is a system, known as the one-time pad, that is unbreakable. Even a brute force attack will not yield the key. But the unfortunate truth is that the expense of using a one-time pad is enormous. It requires exchanging a key that is as long as the plaintext, and even then the key can only be used once. Therefore, one opts for algorithms that, when implemented correctly with the appropriate key size, are unbreakable in any reasonable amount of time.
An important point when considering key size is that, in many cases, one can mathematically increase security by a slight increase in key size, but this is not always practical. If you are working with chips that can handle words of 64 bits, then an increase in the key size from 64 to 65 bits could mean redesigning your hardware, which could be expensive. Therefore, designing good cryptosystems involves both mathematical and engineering considerations.
Finally, we need a few words about the size of numbers. Your intuition might say that working with a 20-digit number takes twice as long as working with a 10-digit number. That is true in some algorithms. However, if you count up to 10^10, you are not even close to 10^20; you are only one 10 billionth of the way there. Similarly, a brute force attack against a 60-bit key takes a billion times longer than one against a 30-bit key.
There are two ways to measure the size of numbers: the actual magnitude of the number n, and the number of digits in its decimal representation (we could also use its binary representation), which is approximately log10(n). The number of single-digit multiplications needed to square a k-digit number n, using the standard algorithm from elementary school, is k^2, or approximately (log10 n)^2. The number of divisions needed to factor a number n by dividing by all primes up to the square root of n is around 10^(k/2), or approximately n^(1/2). An algorithm that runs in time a power of log n is much more desirable than one that runs in time a power of n. In the present example, if we double the number of digits in n, the time it takes to square n increases by a factor of 4, while the time it takes to factor n increases enormously. Of course, there are better algorithms available for both of these operations, but, at present, factorization takes significantly longer than multiplication.
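The contrast between these two running times can be tabulated directly. This sketch uses hypothetical cost functions based on the schoolbook estimates above, not timings of real implementations:

```python
# Schoolbook cost estimates for a k-digit number n:
# squaring costs about k**2 digit multiplications (a power of log n),
# while trial division needs about 10**(k/2) divisions (a power of n).

def squaring_cost(k):
    """Digit multiplications to square a k-digit number (schoolbook model)."""
    return k ** 2

def trial_division_cost(k):
    """Rough number of trial divisions: primes up to sqrt(n) ~ 10**(k/2)."""
    return 10 ** (k / 2)

for k in (10, 20):
    print(k, squaring_cost(k), trial_division_cost(k))
# Doubling k multiplies the squaring cost by 4 but squares the factoring cost.
```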
We’ll meet algorithms that take time a power of log n to perform certain calculations (for example, finding greatest common divisors and doing modular exponentiation). There are other computations for which the best known algorithms run only slightly better than a power of n (for example, factoring and finding discrete logarithms). The interplay between the fast algorithms and the slower ones is the basis of several cryptographic algorithms that we’ll encounter in this book.
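As a rough illustration of this gap (the function and the sample numbers are mine, not from the text), Python's built-in three-argument pow performs modular exponentiation with about log₂ of the exponent squarings, while trial-division factoring may need on the order of √n divisions:

```python
# Modular exponentiation runs in time a power of log n; trial-division
# factoring runs in time roughly a power of n (namely sqrt(n)).

def trial_division(n):
    """Factor n by dividing by all trial divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(pow(3, 10**6, 101))          # about 20 squarings, despite the huge exponent
print(trial_division(2**31 - 1))   # ~46,000 divisions just to confirm a prime
```

Doubling the number of digits of the exponent barely slows down pow, while doubling the number of digits of n squares the work done by trial division.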
Cryptography is not only about encrypting and decrypting messages, it is also about solving real-world problems that require information security. There are four main objectives that arise:
Confidentiality: Eve should not be able to read Alice’s message to Bob. The main tools are encryption and decryption algorithms.
Data integrity: Bob wants to be sure that Alice’s message has not been altered. For example, transmission errors might occur. Also, an adversary might intercept the transmission and alter it before it reaches the intended recipient. Many cryptographic primitives, such as hash functions, provide methods to detect data manipulation by malicious or accidental adversaries.
Authentication: Bob wants to be sure that only Alice could have sent the message he received. Under this heading, we also include identification schemes and password protocols (in which case, Bob is the computer). There are actually two types of authentication that arise in cryptography: entity authentication and data-origin authentication. Often the term identification is used to specify entity authentication, which is concerned with proving the identity of the parties involved in a communication. Data-origin authentication focuses on tying the information about the origin of the data, such as the creator and time of creation, with the data.
Non-repudiation: Alice cannot claim she did not send the message. Non-repudiation is particularly important in electronic commerce applications, where it is important that a consumer cannot deny the authorization of a purchase.
Authentication and non-repudiation are closely related concepts, but there is a difference. In a symmetric key cryptosystem, Bob can be sure that a message comes from Alice (or someone who knows Alice’s key) since no one else could have encrypted the message that Bob decrypts successfully. Therefore, authentication is automatic. However, he cannot prove to anyone else that Alice sent the message, since he could have sent the message himself. Therefore, non-repudiation is essentially impossible. In a public key cryptosystem, both authentication and non-repudiation can be achieved (see Chapters 9, 13, and 15).
Much of this book will present specific cryptographic applications, both in the text and as exercises. Here is an overview.
Digital signatures: One of the most important features of a paper and ink letter is the signature. When a document is signed, an individual’s identity is tied to the message. The assumption is that it is difficult for another person to forge the signature onto another document. Electronic messages, however, are very easy to copy exactly. How do we prevent an adversary from cutting the signature off one document and attaching it to another electronic document? We shall study cryptographic protocols that allow for electronic messages to be signed in such a way that everyone believes that the signer was the person who signed the document, and such that the signer cannot deny signing the document.
Identification: When logging into a machine or initiating a communication link, a user needs to identify herself or himself. But simply typing in a user name is not sufficient as it does not prove that the user is really who he or she claims to be. Typically a password is used. We shall touch upon various methods for identifying oneself. In the chapter on DES we discuss password files. Later, we present the Feige-Fiat-Shamir identification scheme, which is a zero-knowledge method for proving identity without revealing a password.
Key establishment: When large quantities of data need to be encrypted, it is best to use symmetric key encryption algorithms. But how does Alice give the secret key to Bob when she doesn’t have the opportunity to meet him personally? There are various ways to do this. One way uses public key cryptography. Another method is the Diffie-Hellman key exchange algorithm. A different approach to this problem is to have a trusted third party give keys to Alice and Bob. Two examples are Blom’s key generation scheme and Kerberos, which is a very popular symmetric cryptographic protocol that provides authentication and security in key exchange between users on a network.
Secret sharing: In Chapter 17, we introduce secret sharing schemes. Suppose that you have a combination to a bank safe, but you don’t want to trust any single person with the combination to the safe. Rather, you would like to divide the combination among a group of people, so that at least two of these people must be present in order to open the safe. Secret sharing solves this problem.
Security protocols: How can we carry out secure transactions over open channels such as the Internet, and how can we protect credit card information from fraudulent merchants? We discuss various protocols, such as SSL and SET.
Electronic cash: Credit cards and similar devices are convenient but do not provide anonymity. Clearly a form of electronic cash could be useful, at least to some people. However, electronic entities can be copied. We give an example of an electronic cash system that provides anonymity but catches counterfeiters, and we discuss cryptocurrencies, especially Bitcoin.
Games: How can you flip coins or play poker with people who are not in the same room as you? Dealing the cards, for example, presents a problem. We show how cryptographic ideas can solve these problems.
Methods of making messages unintelligible to adversaries have been important throughout history. In this chapter we shall cover some of the older cryptosystems that were primarily used before the advent of the computer. These cryptosystems are too weak to be of much use today, especially with computers at our disposal, but they give good illustrations of several of the important ideas of cryptology.
First, for these simple cryptosystems, we make some conventions.
plaintext will be written in lowercase letters and CIPHERTEXT will be written in capital letters (except in the computer problems).
The letters of the alphabet are assigned numbers as follows: a = 0, b = 1, c = 2, …, y = 24, z = 25.
Note that we start with a = 0, so z is letter number 25. Because many people are accustomed to a being 1 and z being 26, the present convention can be annoying, but it is standard for the elementary cryptosystems that we’ll consider.
Spaces and punctuation are omitted. This is even more annoying, but it is almost always possible to replace the spaces in the plaintext after decrypting. If spaces were left in, there would be two choices. They could be left as spaces; but this yields so much information on the structure of the message that decryption becomes easier. Or they could be encrypted; but then they would dominate frequency counts (unless the message averages at least eight letters per word), again simplifying decryption.
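These conventions are easy to apply mechanically; here is a small sketch (the helper names are mine, not from the text):

```python
# Helpers for the stated conventions: letters numbered a = 0, ..., z = 25,
# and spaces/punctuation stripped before encryption.

def num(letter):
    """Number of a letter under the a = 0, ..., z = 25 convention."""
    return ord(letter.lower()) - ord("a")

def normalize(message):
    """Keep only the letters of a message, in lowercase."""
    return "".join(c.lower() for c in message if c.isalpha())

print(num("a"), num("z"))            # 0 25
print(normalize("Attack at dawn!"))  # attackatdawn
```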
Note: In this chapter, we’ll be using some concepts from number theory, especially modular arithmetic. If you are not familiar with congruences, you should read the first three sections of Chapter 3 before proceeding.
One of the earliest cryptosystems is often attributed to Julius Caesar. Suppose he wanted to send a plaintext such as gaul is divided into three parts, but he didn’t want Brutus to read it. He shifted each letter backwards by three places, so d became A, e became B, f became C, etc. The beginning of the alphabet wrapped around to the end, so a became X, b became Y, and c became Z. The ciphertext was then DXRIFPAFSFABAFKQLQEOBBMXOQP.
Decryption was accomplished by shifting FORWARD by three spaces (and trying to figure out how to put the spaces back in).
We now give the general situation. If you are not familiar with modular arithmetic, read the first few pages of Chapter 3 before continuing.
Label the letters as integers from 0 to 25. The key is an integer κ with 0 ≤ κ ≤ 25. The encryption process is x ↦ x + κ (mod 26).
Decryption is y ↦ y − κ (mod 26). For example, Caesar’s backward shift by three places corresponds to κ = 23.
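The encryption and decryption maps can be sketched in a few lines of code (the function names are mine; a backward shift by three places is κ = 23):

```python
# The shift cipher under the a = 0, ..., z = 25 convention:
# encryption is x -> x + kappa (mod 26), decryption is y -> y - kappa (mod 26).

def shift_encrypt(plaintext, kappa):
    """Encrypt lowercase plaintext; output CIPHERTEXT in capitals."""
    return "".join(chr((ord(c) - ord("a") + kappa) % 26 + ord("A"))
                   for c in plaintext)

def shift_decrypt(ciphertext, kappa):
    """Decrypt capital-letter ciphertext; output lowercase plaintext."""
    return "".join(chr((ord(c) - ord("A") - kappa) % 26 + ord("a"))
                   for c in ciphertext)

ct = shift_encrypt("gaulisdividedintothreeparts", 23)
print(ct)                     # DXRIFPAFSFABAFKQLQEOBBMXOQP
print(shift_decrypt(ct, 23))  # gaulisdividedintothreeparts
```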
Let’s see how the four types of attack work.
Ciphertext only: Eve has only the ciphertext. Her best strategy is an exhaustive search, since there are only 26 possible keys. See Example 1 in the Computer Appendices. If the message is longer than a few letters (we will make this more precise later when we discuss entropy), it is unlikely that there is more than one meaningful message that could be the plaintext. If you don’t believe this, try to find some words of four or more letters that are shifts of each other. Three such words are given in Exercises 1 and 2. Another possible attack, if the message is sufficiently long, is to do a frequency count for the various letters. The letter e occurs most frequently in most English texts. Suppose the letter L appears most frequently in the ciphertext. Since e = 4 and L = 11, a reasonable guess is that κ = 11 − 4 = 7. However, for shift ciphers this method takes much longer than an exhaustive search, plus it requires many more letters in the message in order for it to work (anything short, such as this, might not contain a common symbol, thus changing statistical counts).
Known plaintext: If you know just one letter of the plaintext along with the corresponding letter of ciphertext, you can deduce the key. For example, if you know that e (= 4) encrypts to L (= 11), then the key is κ ≡ 11 − 4 ≡ 7 (mod 26).
Chosen plaintext: Choose the letter a as the plaintext. The ciphertext gives the key. For example, if the ciphertext is H, then the key is 7.
Chosen ciphertext: Choose the letter A as ciphertext. The plaintext is the negative of the key. For example, if the plaintext is t (= 19), then the key is −19 ≡ 7 (mod 26).
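The exhaustive ciphertext-only search above is a one-line loop; here is a sketch (the function name is mine, and the sample ciphertext is the Caesar example):

```python
# Exhaustive search: decrypt under all 26 keys and look for the one
# decryption that reads as meaningful English.

def shift_decrypt(ciphertext, kappa):
    return "".join(chr((ord(c) - ord("A") - kappa) % 26 + ord("a"))
                   for c in ciphertext)

for kappa in range(26):
    print(kappa, shift_decrypt("DXRIFPAFSFABA", kappa))
# Only kappa = 23 produces a meaningful plaintext: gaulisdivided
```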
The shift ciphers may be generalized and slightly strengthened as follows. Choose two integers α and β, with gcd(α, 26) = 1, and consider the function (called an affine function) x ↦ αx + β (mod 26).
For example, let α = 9 and β = 2, so we are working with 9x + 2. Take a plaintext letter such as h (= 7). It is encrypted to 9 · 7 + 2 = 65 ≡ 13 (mod 26), which is the letter N. Using the same function, we obtain affine ↦ CVVWPM.
How do we decrypt? If we were working with rational numbers rather than mod 26, we would start with y = 9x + 2 and solve: x = (1/9)(y − 2). But 1/9 needs to be reinterpreted when we work mod 26. Since gcd(9, 26) = 1, there is a multiplicative inverse for 9 mod 26 (if this last sentence doesn’t make sense to you, read Section 3.3 now). In fact, 9 · 3 ≡ 27 ≡ 1 (mod 26), so 3 is the desired inverse and can be used in place of 1/9. We therefore have x ≡ 3(y − 2) ≡ 3y − 6 ≡ 3y + 20 (mod 26).
Let’s try this. The letter C (= 2) is mapped to 3 · 2 + 20 = 26 ≡ 0 (mod 26), which is the letter a. Similarly, we see that the ciphertext CVVWPM is decrypted back to affine. For more examples, see Examples 2 and 3 in the Computer Appendices.
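This affine example can be checked in code; the sketch below (the function name is mine) applies x ↦ αx + β (mod 26) letterwise, so encryption uses (9, 2) and decryption uses (3, 20):

```python
# The affine cipher 9x + 2 mod 26 and its decryption function 3y + 20 mod 26
# (3 is the multiplicative inverse of 9 mod 26).

def affine(text, a, b, upper=True):
    """Apply x -> a*x + b (mod 26) to each letter of text."""
    base_in = "A" if text.isupper() else "a"
    base_out = "A" if upper else "a"
    return "".join(chr((a * (ord(c) - ord(base_in)) + b) % 26 + ord(base_out))
                   for c in text)

ct = affine("affine", 9, 2)            # encrypt with 9x + 2
print(ct)                              # CVVWPM
print(affine(ct, 3, 20, upper=False))  # decrypt with 3y + 20 -> affine
```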
Suppose we try to use the function 13x + 4 as our encryption function. We obtain input ↦ ERRER.
If we alter the input, we obtain alter ↦ ERRER.
Clearly this function leads to errors. It is impossible to decrypt, since several plaintexts yield the same ciphertext. In particular, we note that encryption must be one-to-one, and this fails in the present case.
What goes wrong in this example? If we solve y = 13x + 4, we obtain x = (1/13)(y − 4). But 1/13 does not exist mod 26 since gcd(13, 26) = 13 ≠ 1. More generally, it can be shown that αx + β is a one-to-one function mod 26 if and only if gcd(α, 26) = 1. In this case, decryption uses x ≡ α*(y − β) (mod 26), where α α* ≡ 1 (mod 26). So decryption is also accomplished by an affine function.
The key for this encryption method is the pair (α, β). There are 12 possible choices for α with gcd(α, 26) = 1 and there are 26 choices for β (since we are working mod 26, we only need to consider α and β between 0 and 25). Therefore, there are 12 · 26 = 312 choices for the key.
Let’s look at the possible attacks.
Ciphertext only: An exhaustive search through all 312 keys would take longer than the corresponding search in the case of the shift cipher; however, it would be very easy to do on a computer. When all possibilities for the key are tried, a fairly short ciphertext, say around 20 characters, will probably correspond to only one meaningful plaintext, thus allowing the determination of the key. It would also be possible to use frequency counts, though this would require much longer texts.
Known plaintext: With a little luck, knowing two letters of the plaintext and the corresponding letters of the ciphertext suffices to find the key. In any case, the number of possibilities for the key is greatly reduced and a few more letters should yield the key.
For example, suppose the plaintext starts with if and the corresponding ciphertext is PQ. In numbers, this means that 8 maps to 15 and 5 maps to 16. Therefore, we have the equations 8α + β ≡ 15 and 5α + β ≡ 16 (mod 26).
Subtracting yields 3α ≡ −1 ≡ 25 (mod 26), which has the unique solution α = 17. Using the first equation, we find 8 · 17 + β ≡ 15 (mod 26), which yields β = 9.
Suppose instead that the plaintext go corresponds to the ciphertext TH. We obtain the equations 6α + β ≡ 19 and 14α + β ≡ 7 (mod 26).
Subtracting yields 8α ≡ −12 ≡ 14 (mod 26). Since gcd(8, 26) = 2, this has two solutions: α = 5, 18. The corresponding values of β are both 15 (this is not a coincidence; it will always happen this way when the coefficients of α in the equations are even). So we have two candidates for the key: (5, 15) and (18, 15). However, gcd(18, 26) ≠ 1, so the second is ruled out. Therefore, the key is (5, 15).
The preceding procedure works unless the gcd we get is 13 (or 26). In this case, use another letter of the message, if available.
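The known-plaintext attack just described can be automated; the sketch below (the function name is mine) simply tests each invertible α against the subtracted congruence rather than solving it by hand:

```python
# Known-plaintext attack on the affine cipher: given two pairs
# x1 -> y1 and x2 -> y2, find all keys (a, b) with
#   a*x1 + b = y1 and a*x2 + b = y2 (mod 26), gcd(a, 26) = 1.

from math import gcd

def affine_keys(x1, y1, x2, y2):
    """Return all valid affine keys consistent with the two letter pairs."""
    keys = []
    for a in range(26):
        if gcd(a, 26) != 1:
            continue                   # a must be invertible mod 26
        if (a * (x1 - x2)) % 26 == (y1 - y2) % 26:
            b = (y1 - a * x1) % 26
            keys.append((a, b))
    return keys

# "if" -> "PQ": 8 -> 15 and 5 -> 16, as in the first example.
print(affine_keys(8, 15, 5, 16))   # [(17, 9)]
# "go" -> "TH": 6 -> 19 and 14 -> 7; only (5, 15) survives the gcd condition.
print(affine_keys(6, 19, 14, 7))   # [(5, 15)]
```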
If we know only one letter of plaintext, we still get a relation between α and β. For example, if we only know that h (= 7) in plaintext corresponds to N (= 13) in ciphertext, then we have 7α + β ≡ 13 (mod 26). There are 12 possibilities for α and each gives one corresponding β. Therefore, an exhaustive search through the 12 keys should yield the correct key.
Chosen plaintext: Choose ab as the plaintext. The first character of the ciphertext will be α · 0 + β = β, and the second will be α + β. Therefore, we can find the key.
Chosen ciphertext: Choose AB as the ciphertext. This yields the decryption function in the form x ≡ α₁y + β₁ (mod 26). We could solve for y and obtain the encryption key. But why bother? We have the decryption function, which is what we want.
A variation of the shift cipher was invented back in the sixteenth century. It is often attributed to Vigenère, though Vigenère’s encryption methods were more sophisticated. Well into the twentieth century, this cryptosystem was thought by many to be secure, though Babbage and Kasiski had shown how to attack it during the nineteenth century. In the 1920s, Friedman developed additional methods for breaking this and related ciphers.
The key for the encryption is a vector, chosen as follows. First choose a key length, for example, 6. Then choose a vector of this size whose entries are integers from 0 to 25, for example k = {21, 4, 2, 19, 14, 17}. Often the key corresponds to a word that is easily remembered. In our case, the word is vector. The security of the system depends on the fact that neither the keyword nor its length is known.
To encrypt the message using the key in our example, we shift the first letter of the plaintext by 21. Then shift the second letter by 4, the third by 2, and so on. Once we get to the end of the key, we start back at its first entry, so the seventh letter is shifted by 21, the eighth letter by 4, etc. Here is a diagram of the encryption process for the plaintext hereishowitworks:

| (plaintext) | h | e | r | e | i | s | h | o | w | i | t | w | o | r | k | s |
| (shift) | 21 | 4 | 2 | 19 | 14 | 17 | 21 | 4 | 2 | 19 | 14 | 17 | 21 | 4 | 2 | 19 |
| (ciphertext) | C | I | T | X | W | J | C | S | Y | B | H | N | J | V | M | L |
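The repeating-shift encryption is short in code; here is a sketch (the function name and sample plaintext are mine, with the key derived from the keyword vector):

```python
# Vigenère encryption: shift the i-th plaintext letter by key[i mod len(key)].

def vigenere_encrypt(plaintext, key):
    return "".join(chr((ord(c) - ord("a") + key[i % len(key)]) % 26 + ord("A"))
                   for i, c in enumerate(plaintext))

key = [ord(c) - ord("a") for c in "vector"]    # [21, 4, 2, 19, 14, 17]
print(vigenere_encrypt("hereishowitworks", key))
```

Decryption is the same process with each shift negated.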
A known plaintext attack will succeed if enough characters are known since the key is simply obtained by subtracting the plaintext from the ciphertext mod 26. A chosen plaintext attack using a string of a’s as the plaintext will yield the key immediately, while a chosen ciphertext attack with a string of A’s yields the negative of the key. But suppose you have only the ciphertext. It was long thought that the method was secure against a ciphertext-only attack. However, it is easy to find the key in this case, too.
The cryptanalysis uses the fact that in most English texts the frequencies of letters are not equal. For example, e occurs much more frequently than x. These frequencies have been tabulated in [Beker-Piper] and are provided in Table 2.1.
Of course, variations can occur, though usually it takes a certain amount of effort to produce them. There is a book Gadsby by Ernest Vincent Wright that does not contain the letter e. Even more impressive is the book La Disparition by Georges Perec, written in French, which also does not have a single e (not only are there the usual problems with verbs, etc., but almost all feminine nouns and adjectives must be avoided). There is an English translation by Gilbert Adair, A Void, which also does not contain e. But generally we can assume that the above gives a rough estimate of what usually happens, as long as we have several hundred characters of text.
If we had a simple shift cipher, then the letter e, for example, would always appear as a certain ciphertext letter, which would then have the same frequency as that of e in the original text. Therefore, a frequency analysis would probably reveal the key. However, in the preceding example of a Vigenère cipher, the letter e appears as both I and X. If we had used a longer plaintext, e would probably have been encrypted as each of Z, I, G, X, S, and V, corresponding to the shifts 21, 4, 2, 19, 14, 17. But the occurrences of G in a ciphertext might not come only from e. The letter c is also encrypted to G when its position in the text is such that it is shifted by 4. Similarly, l, n, s, and p can contribute to the ciphertext G, so the frequency of G is a combination of that of e, c, l, n, s, and p from the plaintext. Therefore, it appears to be much more difficult to deduce anything from a frequency count. In fact, the frequency counts are usually smoothed out and are much closer to 1/26 for each letter of ciphertext. At least, they should be much closer than the original distribution for English letters.
Here is a more substantial example. This example is also treated in Example 4 in the Computer Appendices. The ciphertext is the following:
VVHQWVVRHMUSGJGTHKIHTSSEJCHLSFCBGVWCRLRYQTFSVGAHW KCUHWAUGLQHNSLRLJSHBLTSPISPRDXLJSVEEGHLQWKASSKUWE PWQTWVSPGOELKCQYFNSVWLJSNIQKGNRGYBWLWGOVIOKHKAZKQ KXZGYHCECMEIUJOQKWFWVEFQHKIJRCLRLKBIENQFRJLJSDHGR HLSFQTWLAUQRHWDMWLGUSGIKKFLRYVCWVSPGPMLKASSJVOQXE GGVEYGGZMLJCXXLJSVPAIVWIKVRDRYGFRJLJSLVEGGVEYGGEI APUUISFPBTGNWWMUCZRVTWGLRWUGUMNCZVILE
The frequencies are as follows:
| A | B | C | D | E | F | G | H | I | J | K | L | M |
| 8 | 5 | 12 | 4 | 15 | 10 | 27 | 16 | 13 | 14 | 17 | 25 | 7 |
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| 7 | 5 | 9 | 14 | 17 | 24 | 8 | 12 | 22 | 22 | 5 | 8 | 5 |
Note that there is no letter whose frequency is significantly larger than the others. As discussed previously, this is because e, for example, gets spread among several letters during the encryption process.
How do we decrypt the message? There are two steps: finding the key length and finding the key. In the following, we’ll first show how to find the key length and then give one way to find the key. After an explanation of why the method for finding the key works, we give an alternative way to find the key.
Write the ciphertext on a long strip of paper, and again on another long strip. Put one strip above the other, but displaced by a certain number of places (the potential key length). For example, for a displacement of two we have the following:
| | | V | V | H | Q | W | V | V | R | H | M | U | S | G | J | G |
| V | V | H | Q | W | V | V | R | H | M | U | S | G | J | G | T | H |
| | | | | | | | | | | | | | | * | | |
| T | H | K | I | H | T | S | S | E | J | C | H | L | S | F | C | B |
| K | I | H | T | S | S | E | J | C | H | L | S | F | C | B | G | V |
| | | | | | | | | | | | | | | | | |
| G | V | W | C | R | L | R | Y | Q | T | F | S | V | G | A | H |
| W | C | R | L | R | Y | Q | T | F | S | V | G | A | H | W | K |
| | | | | * | | | | | | | | | | | |
Mark a * each time a letter and the one below it are the same, and count the total number of coincidences. In the text just listed, we have two coincidences so far. If we had continued for the entire ciphertext, we would have counted 14 of them. If we do this for different displacements, we obtain the following data:
| displacement: | 1 | 2 | 3 | 4 | 5 | 6 |
| coincidences: | 14 | 14 | 16 | 14 | 24 | 12 |
We have the most coincidences for a displacement of 5. As we explain later, this is the best guess for the length of the key. This method works very quickly, even without a computer, and usually yields the key length.
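The strips-of-paper procedure amounts to sliding the ciphertext against itself; a sketch in code (the function name is mine, and the ciphertext is the example above):

```python
# Count coincidences between the ciphertext and a displaced copy of itself.

def coincidences(ciphertext, displacement):
    return sum(1 for i in range(len(ciphertext) - displacement)
               if ciphertext[i] == ciphertext[i + displacement])

ct = ("VVHQWVVRHMUSGJGTHKIHTSSEJCHLSFCBGVWCRLRYQTFSVGAHW"
      "KCUHWAUGLQHNSLRLJSHBLTSPISPRDXLJSVEEGHLQWKASSKUWE"
      "PWQTWVSPGOELKCQYFNSVWLJSNIQKGNRGYBWLWGOVIOKHKAZKQ"
      "KXZGYHCECMEIUJOQKWFWVEFQHKIJRCLRLKBIENQFRJLJSDHGR"
      "HLSFQTWLAUQRHWDMWLGUSGIKKFLRYVCWVSPGPMLKASSJVOQXE"
      "GGVEYGGZMLJCXXLJSVPAIVWIKVRDRYGFRJLJSLVEGGVEYGGEI"
      "APUUISFPBTGNWWMUCZRVTWGLRWUGUMNCZVILE")

for d in range(1, 7):
    print(d, coincidences(ct, d))
# The count peaks at displacement 5, the key length.
```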
Now suppose we have determined the key length to be 5, as in our example. Look at the 1st, 6th, 11th, ... letters and see which letter occurs most frequently. We obtain
| A | B | C | D | E | F | G | H | I | J | K | L | M |
| 0 | 0 | 7 | 1 | 1 | 2 | 9 | 0 | 1 | 8 | 8 | 0 | 0 |
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| 3 | 0 | 4 | 5 | 2 | 0 | 3 | 6 | 5 | 1 | 0 | 1 | 0 |
The most frequent is G, though C, J, and K are close behind. However, e = J would mean a shift of 5, hence x ↦ C. But this would yield an unusually high frequency for x in the plaintext. Similarly, e = K would mean a shift of 6, hence w ↦ C and d ↦ J, both of which would then have too high frequencies. Finally, e = C would require a shift of 24, hence m ↦ K, which is unlikely to be the case. Therefore, we decide that e = G and the first element of the key is 6 − 4 = 2.
We now look at the 2nd, 7th, 12th, ... letters. We find that G occurs 10 times and S occurs 12 times, and the other letters are far behind. If e = G, the shift is 2, and then S would correspond to the plaintext letter q, which should not occur 12 times in the plaintext. Therefore, e = S and the second element of the key is 18 − 4 = 14.
Now look at the 3rd, 8th, 13th, ... letters. The frequencies are
| A | B | C | D | E | F | G | H | I | J | K | L | M |
| 0 | 1 | 0 | 3 | 3 | 1 | 3 | 5 | 1 | 0 | 4 | 10 | 0 |
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| 2 | 1 | 2 | 3 | 5 | 3 | 0 | 2 | 8 | 7 | 1 | 0 | 1 |
The initial guess that e = L runs into problems; with the resulting shift of 7, for example, p ↦ W and k ↦ R would have too high frequencies and t ↦ A too low. Similarly, e = V and e = W do not seem likely. The best choice is e = H, a shift of 3, and therefore the third key element is 3.
The 4th, 9th, 14th, ... letters yield 4 as the fourth element of the key. Finally, the 5th, 10th, 15th, ... letters yield 18 as the final key element. Our guess for the key is therefore {2, 14, 3, 4, 18}, which spells the keyword codes.
As we saw in the case of the 3rd, 8th, 13th, ... letters (this also happened in the 5th, 10th, 15th, ... case), if we take every fifth letter we have a much smaller sample of letters on which we are doing a frequency count. Another letter can overtake e in a short sample. But it is probable that most of the high-frequency letters appear with high frequencies, and most of the low-frequency ones appear with low frequencies. As in the present case, this is usually sufficient to identify the corresponding entry in the key.
Once a potential key is found, test it by using it to decrypt. It should be easy to tell whether it is correct.
In our example, the key is conjectured to be {2, 14, 3, 4, 18}. If we decrypt the ciphertext using this key, we obtain
themethodusedforthepreparationandreadingofcodemessagesis
simpleintheextremeandatthesametimeimpossibleoftranslatio
nunlessthekeyisknowntheeasewithwhichthekeymaybechangedis
anotherpointinfavoroftheadoptionofthiscodebythosedesirin
gtotransmitimportantmessageswithouttheslightestdangeroft
heirmessagesbeingreadbypoliticalorbusinessrivalsetc
This passage is taken from a short article in Scientific American, Supplement LXXXIII (January 27, 1917), page 61. A short explanation of the Vigenère cipher is given, and the preceding passage expresses an opinion as to its security.
Before proceeding to a second method for finding the key, we give an explanation of why the procedure given earlier finds the key length. In order to avoid confusion, we note that when we use the word “shift” for a letter, we are referring to what happens during the Vigenère encryption process.
We also will be shifting elements in vectors. However, when we slide one strip of paper to the right or left relative to the other strip, we use the word “displacement.”
Put the frequencies of English letters (Table 2.1) into a vector: A0 = (.082, .015, .028, .043, .127, .022, .020, .061, .070, .002, .008, .040, .024, .067, .075, .019, .001, .060, .063, .091, .028, .010, .023, .001, .020, .001), where the first entry is the frequency of a, the second of b, and so on.
Let Ai be the result of shifting A0 by i spaces to the right. For example, A2 = (.020, .001, .082, .015, .028, …).
The dot product of A0 with itself is A0 · A0 = (.082)² + (.015)² + ⋯ = .066.
Of course, Ai · Ai is also equal to .066 for every i, since we get the same sum of products, starting with a different term. However, the dot products A0 · Ai are much lower when i ≠ 0, ranging from .032 to .045:
| i | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| A0 · Ai | .066 | .039 | .032 | .034 | .044 | .033 | .036 |
| i | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| A0 · Ai | .039 | .034 | .034 | .038 | .045 | .039 | .042 |
The dot product Ai · Aj depends only on |i − j|. This can be seen as follows. The entries in the vectors Ai and Aj are the same as those in A0, but shifted. In the dot product, the kth entry of Ai is multiplied by the kth entry of Aj, the (k+1)st times the (k+1)st, etc. So each element is multiplied by the element |i − j| positions removed from it. Therefore, the dot product depends only on the difference i − j. However, by reversing the roles of i and j, and noting that Ai · Aj = Aj · Ai, we see that i − j and j − i give the same dot products, so the dot product only depends on |i − j|. In the preceding table, we only needed to compute up to 13. For example, A0 · A17 corresponds to a shift by 17 in one direction, or 9 in the other direction, so it will give the same dot product as A0 · A9.
The reason A0 · A0 is higher than the other dot products is that the large numbers in the vectors are paired with large numbers and the small ones are paired with small. In the other dot products, the large numbers are paired somewhat randomly with other numbers. This lessens their effect. For another reason that A0 · A0 is higher than the other dot products, see Exercise 23.
Let’s assume that the distribution of letters in the plaintext closely matches that of English, as expressed by the vector A0 above. Look at a random letter in the top strip of ciphertext. It corresponds to a random letter of English shifted by some amount i (corresponding to an element of the key). The letter below it corresponds to a random letter of English shifted by some amount j.
For concreteness, let’s suppose that i = 0 and j = 2. The probability that the letter in the 50th position, for example, is A is given by the first entry in A0, namely .082. The letter directly below, on the second strip, has been shifted from the original plaintext by 2 positions. If this ciphertext letter is A, then the corresponding plaintext letter was y, which occurs in the plaintext with probability .020. Note that .020 is the first entry of the vector A2. The probability that the letter in the 50th position on the first strip and the letter directly below it are both the letter A is therefore (.082)(.020). Similarly, the probability that both letters are B is (.015)(.001). Working all the way through Z, we see that the probability that the two letters are the same is (.082)(.020) + (.015)(.001) + ⋯ = A0 · A2 = .032.
In general, when the encryption shifts are i and j, the probability that the two letters are the same is Ai · Aj. When i = j, this is approximately .066, but if i ≠ j, then the dot product is at most .045.
We are in the situation where i = j exactly when the letters lying one above the other have been shifted by the same amount during the encryption process, namely when the top strip is displaced by an amount equal to the key length (or a multiple of the key length). Therefore we expect more coincidences in this case.
For a displacement of 5 in the preceding ciphertext, we had 326 comparisons and 24 coincidences. By the reasoning just given, we should expect approximately 326 × .066 ≈ 21.5 coincidences, which is close to the actual value.
Using the preceding ideas, we give another method for determining the key. It seems to work somewhat better than the first method on short samples, though it requires a little more calculation.
We’ll continue to work with the preceding example. To find the first element of the key, count the occurrences of the letters in the 1st, 6th, 11th, ... positions, as before, and put them in a vector: V = (0, 0, 7, 1, 1, 2, 9, 0, 1, 8, 8, 0, 0, 3, 0, 4, 5, 2, 0, 3, 6, 5, 1, 0, 1, 0)
(the first entry gives the number of occurrences of A, the second gives the number of occurrences of B, etc.). If we divide by 67, which is the total number of letters counted, we obtain a vector W = (1/67)V = (0, 0, .104, .015, .015, .030, .134, 0, .015, .119, .119, 0, 0, .045, 0, .060, .075, .030, 0, .045, .090, .075, .015, 0, .015, 0).
Let’s think about where this vector comes from. Since we know the key length is 5, the 1st, 6th, 11th, ... letters in the ciphertext were all shifted by the same amount (as we’ll see shortly, they were all shifted by 2). Therefore, they represent a random sample of English letters, all shifted by the same amount. Their frequencies, which are given by the vector W, should approximate the vector Ai, where i is the shift caused by the first element of the key.
The problem now is to determine i. Recall that Aj · Ai is largest when j = i, and that W approximates Ai. If we compute W · Aj for j = 0, 1, …, 25, the maximum value should occur when j = i. We therefore compute these 26 dot products.
The largest value is the third, namely W · A2 = .0713. Therefore, we guess that the first shift is 2, which corresponds to the key letter c.
Let’s use the same method to find the third element of the key. We calculate a new vector W, using the frequencies for the 3rd, 8th, 13th, ... letters that we tabulated previously: W = (1/66)(0, 1, 0, 3, 3, 1, 3, 5, 1, 0, 4, 10, 0, 2, 1, 2, 3, 5, 3, 0, 2, 8, 7, 1, 0, 1).
The dot products W · Aj are again computed for j = 0, 1, …, 25.
The largest of these values is the fourth, namely W · A3 = .0624. Therefore, the best guess is that this shift is 3, which corresponds to the key letter d. The other three elements of the key can be found similarly, again yielding {2, 14, 3, 4, 18} as the key.
Notice that the largest dot product was significantly larger than the others in both cases, so we didn’t have to make several guesses to find the correct one. In this way, the present method is superior to the first method presented; however, the first method is much easier to do by hand.
Why is the present method more accurate than the first one? To obtain the largest dot product, several of the larger values in W had to match with the larger values in an Ai. In the earlier method, we tried to match only the letter e, then looked at whether the choices for other letters were reasonable. The present method does this all in one step.
To summarize, here is the method for finding the key. Assume we already have determined that the key length is n.
For i = 1 to n, do the following:
1. Compute the frequencies of the letters in the positions congruent to i mod n, and form the vector Wi.
2. For j = 0 to 25, compute Wi · Aj.
3. Let ki be the value of j that gives the maximum value of Wi · Aj.
The key is probably {k1, …, kn}.
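The summarized procedure is straightforward to implement; here is a sketch (the function name is mine, A0 holds the standard English frequencies from Table 2.1, and the ciphertext is the example above):

```python
# Dot-product method for recovering a Vigenère key of known length n.
# A_j is A_0 shifted j places to the right, so A_j[k] = A_0[(k - j) mod 26].

A0 = [.082, .015, .028, .043, .127, .022, .020, .061, .070, .002,
      .008, .040, .024, .067, .075, .019, .001, .060, .063, .091,
      .028, .010, .023, .001, .020, .001]

def find_key(ciphertext, n):
    key = []
    for i in range(n):
        coset = ciphertext[i::n]        # letters in positions i mod n
        W = [coset.count(chr(ord("A") + k)) / len(coset) for k in range(26)]
        # W approximates A_j for the i-th shift j; maximize W . A_j over j
        dots = [sum(W[k] * A0[(k - j) % 26] for k in range(26))
                for j in range(26)]
        key.append(dots.index(max(dots)))
    return key

ct = ("VVHQWVVRHMUSGJGTHKIHTSSEJCHLSFCBGVWCRLRYQTFSVGAHW"
      "KCUHWAUGLQHNSLRLJSHBLTSPISPRDXLJSVEEGHLQWKASSKUWE"
      "PWQTWVSPGOELKCQYFNSVWLJSNIQKGNRGYBWLWGOVIOKHKAZKQ"
      "KXZGYHCECMEIUJOQKWFWVEFQHKIJRCLRLKBIENQFRJLJSDHGR"
      "HLSFQTWLAUQRHWDMWLGUSGIKKFLRYVCWVSPGPMLKASSJVOQXE"
      "GGVEYGGZMLJCXXLJSVPAIVWIKVRDRYGFRJLJSLVEGGVEYGGEI"
      "APUUISFPBTGNWWMUCZRVTWGLRWUGUMNCZVILE")

print(find_key(ct, 5))   # [2, 14, 3, 4, 18], i.e., the keyword "codes"
```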
One of the more popular cryptosystems is the substitution cipher. It is commonly used in the puzzle section of the weekend newspapers, for example. The principle is simple: Each letter in the alphabet is replaced by another (or possibly the same) letter. More precisely, a permutation of the alphabet is chosen and applied to the plaintext. In the puzzle pages, the spaces between the words are usually preserved, which is a big advantage to the solver, since knowledge of word structure becomes very useful. However, to increase security it is better to omit the spaces.
The shift and affine ciphers are examples of substitution ciphers. The Vigenère cipher (see Section 2.3) is not, since it acts on blocks of letters rather than substituting one letter at a time in a fixed way.
Everyone “knows” that substitution ciphers can be broken by frequency counts. However, the process is more complicated than one might expect.
Consider the following example. Thomas Jefferson has a potentially treasonous message that he wants to send to Ben Franklin. Clearly he does not want the British to read the text if they intercept it, so he encrypts it using a substitution cipher. Fortunately, Ben Franklin knows the permutation being used, so he can simply reverse the permutation to obtain the original message (of course, Franklin was quite clever, so perhaps he could have decrypted it without previously knowing the key).
Now suppose we are working for the Government Code and Cypher School in England back in 1776 and are given the following intercepted message to decrypt.
LWNSOZBNWVWBAYBNVBSQWVWOHWDIZWRBBNPBPOOUWRPAWXAW
PBWZWMYPOBNPBBNWJPAWWRZSLWZQJBNWIAXAWPBSALIBNXWA
BPIRYRPOIWRPQOWAIENBVBNPBPUSREBNWVWPAWOIHWOIQWAB
JPRZBNWFYAVYIBSHNPFFIRWVVBNPBBSVWXYAWBNWVWAIENBV
ESDWARUWRBVPAWIRVBIBYBWZPUSREUWRZWAIDIREBNWIATYV
BFSLWAVHASUBNWXSRVWRBSHBNWESDWARWZBNPBLNWRWDWAPR
JHSAUSHESDWARUWRBQWXSUWVZWVBAYXBIDWSHBNWVWWRZVIB
IVBNWAIENBSHBNWFWSFOWBSPOBWASABSPQSOIVNIBPRZBSIR
VBIBYBWRWLESDWARUWRBOPJIREIBVHSYRZPBISRSRVYXNFAI
RXIFOWVPRZSAEPRIKIREIBVFSLWAVIRVYXNHSAUPVBSVWWUU
SVBOICWOJBSWHHWXBBNWIAVPHWBJPRZNPFFIRWVV
A frequency count yields the following (there are 520 letters in the text):
| W | B | R | S | I | V | A | P | N | O | |
| 76 | 64 | 39 | 36 | 36 | 35 | 34 | 32 | 30 | 16 |
The approximate frequencies of letters in English were given in Section 2.3. We repeat some of the data here in Table 2.2. This allows us to guess with reasonable confidence that W represents e (though B is another possibility). But what about the other letters? We can guess that B, R, S, I, V, A, P, N, with maybe an exception or two, are probably the same as t, a, o, i, n, s, h, r in some order. But a simple frequency count is not enough to decide which is which. What we need to do now is look at digrams, or pairs of letters. We organize our results in Table 2.3 (we only use the most frequent letters here, though it would be better to include all).
The entry 1 in the W row and N column means that the combination WN appears 1 time in the text. The entry 14 in the N row and W column means that NW appears 14 times.
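Both the single-letter count and the digram table are easy to reproduce mechanically. The following sketch (with the ciphertext copied from above) tallies the letters and all adjacent pairs:

```python
from collections import Counter

# The intercepted ciphertext, copied line by line from the example.
CIPHERTEXT = (
    "LWNSOZBNWVWBAYBNVBSQWVWOHWDIZWRBBNPBPOOUWRPAWXAW"
    "PBWZWMYPOBNPBBNWJPAWWRZSLWZQJBNWIAXAWPBSALIBNXWA"
    "BPIRYRPOIWRPQOWAIENBVBNPBPUSREBNWVWPAWOIHWOIQWAB"
    "JPRZBNWFYAVYIBSHNPFFIRWVVBNPBBSVWXYAWBNWVWAIENBV"
    "ESDWARUWRBVPAWIRVBIBYBWZPUSREUWRZWAIDIREBNWIATYV"
    "BFSLWAVHASUBNWXSRVWRBSHBNWESDWARWZBNPBLNWRWDWAPR"
    "JHSAUSHESDWARUWRBQWXSUWVZWVBAYXBIDWSHBNWVWWRZVIB"
    "IVBNWAIENBSHBNWFWSFOWBSPOBWASABSPQSOIVNIBPRZBSIR"
    "VBIBYBWRWLESDWARUWRBOPJIREIBVHSYRZPBISRSRVYXNFAI"
    "RXIFOWVPRZSAEPRIKIREIBVFSLWAVIRVYXNHSAUPVBSVWWUU"
    "SVBOICWOJBSWHHWXBBNWIAVPHWBJPRZNPFFIRWVV"
)

letter_counts = Counter(CIPHERTEXT)
digram_counts = Counter(a + b for a, b in zip(CIPHERTEXT, CIPHERTEXT[1:]))

print(len(CIPHERTEXT), letter_counts["W"], letter_counts["B"])  # 520 76 64
print(digram_counts["NW"], digram_counts["WN"])                 # 14 1
```

This confirms the counts quoted in the text: 520 letters in all, W and B far ahead of the rest, and NW appearing 14 times against a single WN.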
We have already decided that W = e, but if we had extended the table to include low-frequency letters, we would see that W contacts many of these letters, too, which is another characteristic of e. This helps to confirm our guess.
The vowels a, i, and o tend to avoid each other, so the ciphertext letters representing them should rarely appear next to one another in the digram table. The letters S, I, and P have this property. By contrast, a look at the R column shows that R follows S, I, and P fairly often, so we suspect that R is not one of the vowels; B, V, A, and N are out for similar reasons, because they would require unlikely vowel-vowel combinations to appear quite often. Continuing, we see that the most likely possibilities for a, i, and o are S, I, and P in some order.
The letter n has the property that around 80% of the letters that precede it are vowels. Since we have already identified S, I, and P as the vowels, we look for a letter whose column entries are concentrated in the S, I, and P rows, and we see that R and B are the most likely candidates for n. We’ll have to wait to see which is correct.
The letter h often appears before e and rarely after it. In the table, NW appears 14 times while WN appears only once. This tells us that N = h.
The most common digram in the text is BN. Since the most common digram in English is th and we suspect N = h, we conclude that B = t. It follows that n must be represented by R.
Among the frequent letters, V and A remain, and they should equal r and s in some order. Since r pairs more with vowels and s pairs more with consonants, a look at the table shows that A must be r and that s is represented by V.
As a check, the combination st should appear more often than rt, and indeed VB is more frequent than AB in the ciphertext, so our guess is that V = s and A = r.
We can continue the analysis and determine that S = o (note that BS, representing to, is much more common than SB), and that I = i and P = a are the most likely choices. We have therefore determined reasonable guesses for 382 of the 520 characters in the text:
| L | W | N | S | O | Z | B | N | W | V | W | B | A | Y | B | N | V | B | S |
|   | e | h | o |   |   | t | h | e | s | e | t | r |   | t | h | s | t | o |
| Q | W | V | W | O | H | W | D | I | Z | W | R | B | B | N | P | B | P |
|   | e | s | e |   |   | e |   | i |   | e | n | t | t | h | a | t | a |
At this point, knowledge of the language, middle-level frequencies (letters such as l and d), and educated guesses can be used to fill in the remaining letters. For example, in the first line a good guess is that Q = b, since then the word to be appears. Of course, there is a lot of guesswork, and various hypotheses need to be tested until one works.
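Applying the nine guessed letters mechanically shows how much structure is already visible. Here is a sketch using just the first line of the ciphertext, with unknown letters shown as dashes:

```python
# Partial key developed from the frequency and digram analysis above.
GUESS = {"W": "e", "B": "t", "R": "n", "S": "o", "I": "i",
         "V": "s", "A": "r", "P": "a", "N": "h"}

# First line of the intercepted ciphertext.
LINE1 = "LWNSOZBNWVWBAYBNVBSQWVWOHWDIZWRBBNPBPOOUWRPAWXAW"

partial = "".join(GUESS.get(ch, "-") for ch in LINE1)
print(partial)  # -eho--thesetr-thsto-ese--e-i-entthata---enare-re
```

Even with nine letters, phrases such as "these tr-ths" and "e-i-ent" are nearly readable, which is why the remaining letters fall quickly to educated guessing.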
Since the preceding should give the spirit of the method, we skip the remaining details. The decrypted message, with spaces (but not punctuation) added, is as follows (the text is from the middle of the Declaration of Independence):
we hold these truths to be self evident that all men are created equal that they are endowed by their creator with certain unalienable rights that among these are life liberty and the pursuit of happiness that to secure these rights governments are instituted among men deriving their just powers from the consent of the governed that whenever any form of government becomes destructive of these ends it is the right of the people to alter or to abolish it and to institute new government laying its foundation on such principles and organizing its powers in such form as to seem most likely to effect their safety and happiness
Cryptography has appeared in many places in literature, for example, in the works of Edgar Allan Poe (The Gold Bug), William Thackeray (The History of Henry Esmond), Jules Verne (Voyage to the Center of the Earth), and Agatha Christie (The Four Suspects).
Here we give a summary of an enjoyable tale by Arthur Conan Doyle, in which Sherlock Holmes displays his usual cleverness, this time by breaking a cipher system. We cannot do the story justice here, so we urge the reader to read The Adventure of the Dancing Men in its entirety. The following is a cryptic, and cryptographic, summary of the plot.
Mr. Hilton Cubitt, who has recently married the former Elsie Patrick, mails Sherlock Holmes a letter. In it is a piece of paper with dancing stick figures that he found in his garden at Riding Thorpe Manor:
Two weeks later, Cubitt finds another series of figures written in chalk on his toolhouse door:
Two mornings later another sequence appears:
Three days later, another message appears:
Cubitt gives copies of all of these to Holmes, who spends the next two days making many calculations. Suddenly, Holmes jumps from his chair, clearly having made a breakthrough. He quickly sends a long telegram to someone and then waits, telling Watson that they will probably be going to visit Cubitt the next day. But two days pass with no reply to the telegram, and then a letter arrives from Cubitt with yet another message:
Holmes studies it and says they need to travel to Riding Thorpe Manor as soon as possible. A short time later, a reply to Holmes’s telegram arrives, and Holmes indicates that the matter has become even more urgent. When Holmes and Watson arrive at Cubitt’s house the next day, they find the police already there. Cubitt has been shot dead. His wife, Elsie, has also been shot and is in critical condition (although she survives). Holmes asks several questions and then has someone deliver a note to a Mr. Abe Slaney at nearby Elrige’s Farm. Holmes then explains to Watson and the police how he decrypted the messages. First, he guessed that the flags on some of the figures indicated the ends of words. He then noticed that the most common figure was
so it was likely E. This gave the fourth message as –E–E–. The possibilities LEVER, NEVER, SEVER came to mind, but since the message was probably a one-word reply to a previous message, Holmes guessed it was NEVER. Next, Holmes observed that
had the form E– – –E, which could be ELSIE. The third message was therefore – – – E ELSIE. Holmes tried several combinations, finally settling on COME ELSIE as the only viable possibility. The first message therefore was – M –ERE – – E SL– NE–. Holmes guessed that the first letter was A and the third letter was H, which gave the message as AM HERE A–E SLANE–. It was reasonable to complete this to AM HERE ABE SLANEY. The second message then was A– ELRI–ES. Of course, Holmes correctly guessed that this must be stating where Slaney was staying. The only letters that seemed reasonable completed the phrase to AT ELRIGES. It was after decrypting these two messages that Holmes sent a telegram to a friend at the New York Police Bureau, who sent back the reply that Abe Slaney was “the most dangerous crook in Chicago.” When the final message arrived, Holmes decrypted it to ELSIE –RE–ARE TO MEET THY GO–. Since he recognized the missing letters as P, P, and D, respectively, Holmes became very concerned, and that’s why he decided to make the trip to Riding Thorpe Manor.
When Holmes finishes this explanation, the police urge that they go to Elrige’s and arrest Slaney immediately. However, Holmes suggests that this is unnecessary and that Slaney will arrive shortly. Sure enough, Slaney soon appears and is handcuffed by the police. While waiting to be taken away, he confesses to the shooting (it was somewhat in self-defense, he claims) and says that the writing was invented by Elsie Patrick’s father for use by his gang, the Joint, in Chicago. Slaney was engaged to be married to Elsie, but she escaped from the world of gangsters and fled to London. Slaney finally traced her location and sent the secret messages. But why did Slaney walk into the trap that Holmes set? Holmes shows the message he wrote:
From the letters already deduced, we see that this says COME HERE AT ONCE. Slaney was sure this message must have been from Elsie since he was certain no one outside of the Joint could write such messages. Therefore, he made the visit that led to his capture.
What Holmes did was solve a simple substitution cipher, though he did this with very little data. As with most such ciphers, both frequency analysis and a knowledge of the language are very useful. A little luck is nice, too, both in the form of lucky guesses and in the distribution of letters. Note how overwhelmingly E was the most common letter: it appeared 11 times among the 38 characters in the first four messages. This gave Holmes a good start. If Elsie had been Carol and Abe Slaney had been John Smith, the decryption would probably have been more difficult.
Authentication is an important issue in cryptography. If Eve breaks Alice’s cryptosystem, then Eve can often masquerade as Alice in communications with Bob. Safeguards against this are important. The judges gave Abe Slaney many years to think about this issue.
The alert reader might have noticed that we cheated a little when decrypting the messages. The same symbol represents the V in NEVER and the Ps in PREPARE. This is presumably due to a misprint, and it has occurred in every printed version of the story, starting with its first publication back in 1903. In the original text, one of the symbols in NEVER was also misprinted, but this is corrected in later editions (however, in some later editions, the first symbol in the message Holmes wrote is given an extra arm and therefore looks like a different letter). If these mistakes had been in the text that Holmes was working with, he would have had a very difficult time decrypting and would have rightly concluded that the Joint needed to use error correction techniques in their transmissions. In fact, some type of error correction should be used in conjunction with almost every cryptographic protocol.
The Playfair and ADFGX ciphers were used in World War I by the British and the Germans, respectively. By modern standards, they are fairly weak systems, but they took real effort to break at the time.
The Playfair system was invented around 1854 by Sir Charles Wheatstone, who named it after his friend, the Baron Playfair of St. Andrews, who worked to convince the government to use it. In addition to being used in World War I, it was used by the British forces in the Boer War.
The key is a word, for example, playfair. The repeated letters are removed, to obtain playfir, and the remaining letters are used as the beginning of a 5 × 5 matrix. The remaining spaces in the matrix are filled in with the rest of the alphabet in order, with i and j being treated as one letter:

| p | l | a | y | f |
| i | r | b | c | d |
| e | g | h | k | m |
| n | o | q | s | t |
| u | v | w | x | z |
Suppose the plaintext is meet at the schoolhouse. Remove spaces and divide the text into groups of two letters. If a doubled letter appears as a group, insert an x between the two letters and regroup. Add an extra x at the end to complete the last group, if necessary. Our plaintext becomes

me et at th es ch ox ol ho us ex
Now use the matrix to encrypt each two-letter group by the following scheme:
If the two letters are not in the same row or column, replace each letter by the letter that is in its row and is in the column of the other letter. For example, et becomes MN, since m is in the same row as e and the same column as t, and n is in the same row as t and the same column as e.
If the two letters are in the same row, replace each letter with the letter immediately to its right, with the matrix wrapping around from the last column to the first. For example, me becomes EG.
If the two letters are in the same column, replace each letter with the letter immediately below it, with the matrix wrapping around from the last row to the first. For example, ol becomes VR.
The ciphertext in our example is

EG MN FQ QM KN BK SV VR GQ XN KU
To decrypt, reverse the procedure.
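The three rules translate directly into code. The sketch below assumes the square is built from the keyword playfair as described (j merged with i, the remaining letters filled in alphabetically); that reconstruction of the square is an assumption, since the printed square is a figure:

```python
# Playfair square from the keyword "playfair" (repeats dropped, j merged with i,
# remaining letters filled in alphabetical order) -- reconstructed, not copied.
SQUARE = ["playf",
          "irbcd",
          "eghkm",
          "noqst",
          "uvwxz"]
POS = {ch: (r, c) for r, row in enumerate(SQUARE) for c, ch in enumerate(row)}

def make_pairs(text):
    """Split into digraphs, inserting x between doubled letters, padding with x."""
    text = text.replace("j", "i")
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        b = text[i + 1] if i + 1 < len(text) else "x"
        if a == b:          # doubled letter: insert x, restart from second copy
            b = "x"
            i += 1
        else:
            i += 2
        pairs.append((a, b))
    return pairs

def encrypt_pair(a, b):
    (ra, ca), (rb, cb) = POS[a], POS[b]
    if ra == rb:            # same row: take the letters to the right
        return SQUARE[ra][(ca + 1) % 5] + SQUARE[rb][(cb + 1) % 5]
    if ca == cb:            # same column: take the letters below
        return SQUARE[(ra + 1) % 5][ca] + SQUARE[(rb + 1) % 5][cb]
    return SQUARE[ra][cb] + SQUARE[rb][ca]   # rectangle rule

def playfair_encrypt(plaintext):
    return "".join(encrypt_pair(a, b) for a, b in make_pairs(plaintext)).upper()

print(playfair_encrypt("meetattheschoolhouse"))  # EGMNFQQMKNBKSVVRGQXNKU
```

Decryption reverses each rule (left/up instead of right/down, and the same rectangle rule), using the same square.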
The system succumbs to a frequency attack since the frequencies of the various digrams (two-letter combinations) in English have been tabulated. Of course, we only have to look for the most common digrams; they should correspond to the most common digrams in English: th, he, an, in, er, re, es. Moreover, a slight modification yields results more quickly. For example, both of the digrams re and er are very common. If the pairs XY and YX both appear often in the ciphertext, then a good guess is that r, e, and the letters represented by X and Y form the corners of a rectangle in the matrix. Another weakness is that each plaintext letter has only five possible corresponding ciphertext letters. Also, unless the keyword is long, the last few rows of the matrix are predictable. Observations such as these allow the system to be broken with a ciphertext-only attack. For more on its cryptanalysis, see [Gaines].
The ADFGX cipher proceeds as follows. Put the letters of the alphabet into a 5 × 5 matrix. The letters i and j are treated as one, and the rows and columns of the matrix are labeled with the letters A, D, F, G, X. For example, the matrix could be
Each plaintext letter is replaced by the label of its row followed by the label of its column. For example, a letter in row F and column A is replaced by FA. Suppose the plaintext is
The result of this initial step is
So far, this is a disguised substitution cipher. The next step increases the complexity significantly. Choose a keyword, for example, Rhein. Label the columns of a matrix by the letters of the keyword and put the result of the initial step into another matrix:
Now reorder the columns so that the column labels are in alphabetic order:
Finally, the ciphertext is obtained by reading down the columns (omitting the labels) in order:
Decryption is easy, as long as you know the keyword. From the length of the keyword and the length of the ciphertext, the length of each column is determined. The letters are placed into columns, which are reordered to match the keyword. The original matrix is then used to recover the plaintext.
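Both stages, and the decryption procedure just described, can be sketched in a few lines. The 5 × 5 square here is an arbitrary stand-in (the text's square is a figure not reproduced above); the keyword Rhein is the one from the text:

```python
LABELS = "ADFGX"

# Hypothetical 5x5 square for illustration only -- not the one in the text.
SQUARE = ["pgcen",
          "bqozr",
          "slaft",
          "mdviw",
          "kuyxh"]
POS = {ch: (r, c) for r, row in enumerate(SQUARE) for c, ch in enumerate(row)}

def adfgx_encrypt(plaintext, key):
    key = key.lower()
    # Stage 1: replace each letter by its row label, then its column label.
    stage1 = "".join(LABELS[POS[ch][0]] + LABELS[POS[ch][1]]
                     for ch in plaintext.lower().replace("j", "i"))
    # Stage 2: write stage1 under the keyword, then read the columns
    # in alphabetical order of the keyword letters.
    cols = [stage1[i::len(key)] for i in range(len(key))]
    order = sorted(range(len(key)), key=lambda i: key[i])
    return "".join(cols[i] for i in order)

def adfgx_decrypt(ciphertext, key):
    key = key.lower()
    k = len(key)
    order = sorted(range(k), key=lambda i: key[i])
    # Column lengths follow from the lengths of the keyword and ciphertext.
    base, extra = divmod(len(ciphertext), k)
    length = [base + 1 if i < extra else base for i in range(k)]
    cols, start = {}, 0
    for i in order:                      # columns arrive in alphabetical order
        cols[i] = ciphertext[start:start + length[i]]
        start += length[i]
    # Reorder to match the keyword and read the labels off row by row.
    stage1 = "".join(cols[i][r] for r in range(base + 1)
                     for i in range(k) if r < len(cols[i]))
    idx = {ch: n for n, ch in enumerate(LABELS)}
    return "".join(SQUARE[idx[a]][idx[b]]
                   for a, b in zip(stage1[::2], stage1[1::2]))

ct = adfgx_encrypt("attackatdawn", "Rhein")
print(ct)
print(adfgx_decrypt(ct, "Rhein"))  # attackatdawn
```

The round trip illustrates why knowing the keyword makes decryption routine: the keyword length alone determines every column length.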
The initial matrix and the keyword were changed frequently, making cryptanalysis more difficult, since there was only a limited amount of ciphertext available for any combination. However, the system was successfully attacked by the French cryptanalyst Georges Painvin and the Bureau du Chiffre, who were able to decrypt a substantial number of messages.
Here is one technique that was used. Suppose two different ciphertexts intercepted at approximately the same time agree for the first several characters. A reasonable guess is that the two plaintexts agree for several words. That means that the top few entries of the columns for one are the same as for the other. Search through the ciphertexts and find other places where they agree. These possibly represent the beginnings of the columns. If this is correct, we know the column lengths. Divide the ciphertexts into columns using these lengths. For the first ciphertext, some columns will have one length and others will be one longer. The longer ones represent columns that should be near the beginning; the other columns should be near the end. Repeat for the second ciphertext. If a column is long for both ciphertexts, it is very near the beginning. If it is long for one ciphertext and not for the other, it goes in the middle. If it is short for both, it is near the end. At this point, try the various orderings of the columns, subject to these restrictions. Each ordering corresponds to a potential substitution cipher. Use frequency analysis to try to solve these. One of them should yield the plaintext, along with the initial encryption matrix.
The letters A, D, F, G, X were chosen because their symbols in Morse code (.-, -.., ..-., --., -..-) were not easily confused. This was to avoid transmission errors, and represents one of the early attempts to combine error correction with cryptography. Eventually, the ADFGX cipher was replaced by the ADFGVX cipher, which used a 6 × 6 initial matrix. This allowed all 26 letters plus 10 digits to be used.
For more on the cryptanalysis of the ADFGX cipher, see [Kahn].
Mechanical encryption devices known as rotor machines were developed in the 1920s by several people. The best known was designed by Arthur Scherbius and became the famous Enigma machine used by the Germans in World War II.
It was believed to be very secure and several attempts at breaking the system ended in failure. However, a group of three Polish cryptologists, Marian Rejewski, Henryk Zygalski, and Jerzy Różycki, succeeded in breaking early versions of Enigma during the 1930s. Their techniques were passed to the British in 1939, two months before Germany invaded Poland. The British extended the Polish techniques and successfully decrypted German messages throughout World War II.
The fact that Enigma had been broken remained a secret for almost 30 years after the end of the war, partly because the British had sold captured Enigma machines to former colonies and didn’t want them to know that the system had been broken.
In the following, we give a brief description of Enigma and then describe an attack developed by Rejewski. For more details, see for example [Kozaczuk], which contains appendices by Rejewski giving details of attacks on Enigma.
We give a basic schematic diagram of the machine in Figure 2.1. For more details, we urge the reader to visit some of the many websites that can be found on the Internet that give pictures of actual Enigma machines and extensive diagrams of the internal workings of these machines. There are also several online Enigma simulators. Try one of them to get a better understanding of how Enigma works.
L, M, and N are the rotors. On one side of each rotor are 26 fixed electrical contacts, arranged in a circle. On the other side are 26 spring-loaded contacts, again arranged in a circle so as to touch the fixed contacts of the adjacent rotor. Inside each rotor, the fixed contacts are connected to the spring-loaded contacts in a somewhat random manner. These connections are different in each rotor. Each rotor has 26 possible initial settings.
R is the reversing drum. It has 26 spring-loaded contacts, connected in pairs.
K is the keyboard and is the same as a typewriter keyboard.
S is the plugboard. It has approximately six pairs of plugs that can be used to interchange six pairs of letters.
When a key is pressed, the first rotor, L, turns 1/26 of a turn. Then, starting from the key, electricity passes through the plugboard S, then through the rotors L, M, N. When it reaches the reversing drum R, it is sent back along a different path through N, M, L, then through S. At this point, the electricity lights a bulb corresponding to a letter on the keyboard, which is the letter of the ciphertext.
Since the rotor L rotates before each encryption, this is much more complicated than a substitution cipher. Moreover, the rotors M and N also rotate, but much less often, just like the wheels on a mechanical odometer.
Decryption uses exactly the same method. Suppose a sender and receiver have identical machines, both set to the same initial positions. The sender encrypts the message by typing it on the keyboard and recording the sequence of letters indicated by the lamps. This ciphertext is then sent to the receiver, who types the ciphertext into the machine. The sequence of letters appearing in the lamps is the original message. This can be seen as follows. Lamp “a” and key “a” are attached to a wire coming out of the plugboard. Lamp “h” and key “h” are attached to another wire coming out of the plugboard. If the key “a” is pressed and the lamp “h” lights up, then the electrical path through the machine is also connecting lamp “a” to key “h”. Therefore, if the “h” key were pressed instead, then the “a” key would light.
Similar reasoning shows that no letter is ever encrypted as itself. This might appear to be a good idea, but actually it is a weakness since it allows a cryptanalyst to discard many possibilities at the start. See Chapter 14.
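A toy model makes both properties concrete. The sketch below is a simplified rotor machine (random wirings, simple odometer stepping, no ring settings or turnover notches), not the actual Enigma wiring; it nevertheless exhibits exactly the two features just described: running the ciphertext through the machine with the same initial positions recovers the plaintext, and no letter ever encrypts to itself.

```python
import random
import string

N = 26
ALPHA = string.ascii_uppercase
rng = random.Random(42)

# Three rotors with arbitrary (random) internal wirings, plus their inverses.
rotors = []
for _ in range(3):
    p = list(range(N))
    rng.shuffle(p)
    rotors.append(p)
inverses = [[r.index(i) for i in range(N)] for r in rotors]

# Reversing drum: a fixed-point-free involution (13 disjoint swaps).
shuffled = list(range(N))
rng.shuffle(shuffled)
reflector = [0] * N
for i in range(0, N, 2):
    a, b = shuffled[i], shuffled[i + 1]
    reflector[a], reflector[b] = b, a

# Plugboard: interchange six pairs of letters.
pairs = list(range(N))
rng.shuffle(pairs)
plugboard = list(range(N))
for i in range(0, 12, 2):
    a, b = pairs[i], pairs[i + 1]
    plugboard[a], plugboard[b] = b, a

def enigma(text, positions):
    """Encrypt; running the output back through with the same positions decrypts."""
    pos = list(positions)
    out = []
    for ch in text:
        # Odometer-style stepping before each letter (simplified).
        pos[0] = (pos[0] + 1) % N
        if pos[0] == 0:
            pos[1] = (pos[1] + 1) % N
            if pos[1] == 0:
                pos[2] = (pos[2] + 1) % N
        x = plugboard[ALPHA.index(ch)]
        for r, o in zip(rotors, pos):          # forward through the rotors
            x = (r[(x + o) % N] - o) % N
        x = reflector[x]                       # reversing drum
        for r, o in zip(reversed(inverses), reversed(pos)):  # back again
            x = (r[(x + o) % N] - o) % N
        out.append(ALPHA[plugboard[x]])
    return "".join(out)

pt = "ATTACKATDAWN"
ct = enigma(pt, (0, 0, 0))
print(ct)
print(enigma(ct, (0, 0, 0)))  # ATTACKATDAWN
```

The reciprocity holds because the whole path is (plugboard)(rotors)(reflector)(rotors reversed)(plugboard), which is its own inverse whenever the reflector and plugboard are involutions; and a fixed point would force the reflector to send some contact to itself, which it never does.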
The security of the system rests on keeping secret the initial settings of the rotors, the settings of the plugs on the plugboard, and the internal wiring of the rotors and reversing drum. The settings of the rotors and the plugboard are changed periodically (for example, daily).
We’ll assume the internal wiring of the rotors is known. This would be the case if a machine were captured, for example. However, there are ways to deduce this information, given enough ciphertext, and this is what was actually done in some cases.
How many combinations of settings are there? There are 26 initial settings for each of the three rotors. This gives 26 × 26 × 26 = 17576 possibilities. There are six possible orderings of the three rotors. This yields 6 × 17576 = 105456 possible ways to initialize the rotors. In later versions of Enigma, there were five rotors available, and each day three were chosen. This made 60 possible orderings of the rotors and therefore 60 × 17576 = 1054560 ways to initialize the rotors.
On the plugboard, there are 100391791500 ways of interchanging six pairs of letters.
In all, there seem to be too many possible initializations of the machine to have any hope of breaking the system. Techniques such as frequency analysis fail since the rotations of the rotors change the substitution for each character of the message.
So, how was Enigma attacked? We don’t give the whole attack here, but rather show how the initial settings of the rotors were determined in the years around 1937. This attack depended on a weakness in the protocol being used at that time, but it gives the general flavor of how the attacks proceeded in other situations.
Each Enigma operator was given a codebook containing the daily settings to be used for the next month. However, if these settings had been used without modification, then each message sent during a given day would have had its first letter encrypted by the same substitution cipher. The rotor would then have turned and the second letter of each text would have corresponded to another substitution cipher, and this substitution would have been the same for all messages for that day. A frequency analysis on the first letter of each intercepted message during a day would probably allow a decryption of the first letter of each text. A second frequency analysis would decrypt the second letters. Similarly, the remaining letters of the ciphertexts (except for the ends of the longest few ciphertexts) could be decrypted.
To avoid this problem, for each message the operator chose a message key consisting of a sequence of three letters, for example, xyz. He then used the daily setting from the codebook to encrypt this message key. But since radio communications were prone to error, he typed in xyz twice, therefore encrypting xyzxyz to obtain a string of six letters. The rotors were then set to positions x, y, and z and the encryption of the actual message began. So the first six letters of the transmitted message were the encrypted message key, and the remainder was the ciphertext. Since each message used a different key, frequency analysis didn’t work.
The receiver simply used the daily settings from the codebook to decrypt the first six letters of the message. He then reset the rotors to the positions indicated by the decrypted message key and proceeded to decrypt the message.
The duplication of the key was a great aid to the cryptanalysts. Suppose that on some day you intercept several messages, and among them are three that have the following initial six letters:
dmqvbn
vonpuy
pucfmq
All of these were encrypted with the same daily settings from the codebook. The first encryption corresponds to a permutation of the 26 letters; let’s call this permutation A. Before the second letter is encrypted, a rotor turns, so the second letter uses another permutation; call it B. Similarly, there are permutations C, D, E, F for the remaining four letters. The strategy is to look at the products AD, BE, and CF.
We need a few conventions and facts about permutations. When we write AB for two permutations A and B, we mean that we apply the permutation A, then the permutation B (some books use the reverse ordering). The permutation that maps a to b, b to c, and c to a will be denoted by the 3-cycle (a b c). A similar notation will be used for cycles of other lengths. For example, (a b) is the permutation that switches a and b. A permutation can be written as a product of cycles. For example, the permutation

(a d f) (b e)

is the permutation that maps a to d, d to f, f to a, b to e, etc., and fixes c and g, along with every other letter that does not appear in one of the cycles. If the cycles are disjoint (meaning that no two cycles have letters in common), then this decomposition into cycles is unique.
Let’s look back at the intercepted texts. We don’t know the letters of any of the three message keys, but let’s denote the first message key by xyz. Therefore, xyzxyz encrypts to dmqvbn. We know that permutation A sends x to d. Also, the fourth permutation D sends x to v. But we know more. Because of the internal wiring of the machine, A actually interchanges x and d, and D interchanges x and v. Therefore, the product of the permutations, AD, sends d to v (namely, A sends d to x and then D sends x to v). The unknown x has been eliminated. Similarly, the second intercepted text tells us that AD sends v to p, and the third tells us that AD sends p to f. We have therefore determined that AD sends d to v, v to p, and p to f.
In the same way, the second and fifth letters of the three messages tell us that BE sends m to b, o to u, and u to m,
and the third and sixth letters tell us that CF sends q to n, n to y, and c to q.
With enough data, we can deduce the decompositions of AD, BE, and CF into products of cycles. For example, we might have

AD = (d v p f k x g z y o) (e i j m u n q l h t) (b c) (r w) (a) (s).
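The bookkeeping in the last few paragraphs is purely mechanical. Writing AD for the product of the first and fourth permutations, BE for the second and fifth, and CF for the third and sixth, each intercepted six-letter indicator contributes one pair to each product:

```python
# The three intercepted six-letter indicators from the example.
indicators = ["dmqvbn", "vonpuy", "pucfmq"]

# Letters 1 and 4 give a pair for AD, 2 and 5 for BE, 3 and 6 for CF.
AD = {ind[0]: ind[3] for ind in indicators}
BE = {ind[1]: ind[4] for ind in indicators}
CF = {ind[2]: ind[5] for ind in indicators}

print(AD)  # {'d': 'v', 'v': 'p', 'p': 'f'}
print(BE)  # {'m': 'b', 'o': 'u', 'u': 'm'}
print(CF)  # {'q': 'n', 'n': 'y', 'c': 'q'}
```

With enough intercepts in a day, these partial maps fill in to complete permutations, whose cycle decompositions can then be read off.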
This information depends only on the daily settings of the plugboard and the rotors, not on the message key. Therefore, it relates to every machine used on a given day.
Let’s look at the effect of the plugboard. It introduces a permutation S at the beginning of the process and then applies the inverse permutation S^-1 at the end. We need another fact about permutations: Suppose we take a permutation P and another permutation of the form S P S^-1 for some permutation S (where S^-1 denotes the inverse permutation of S; in our case, S is the permutation produced by the plugboard) and decompose each into cycles. They will usually not have the same cycles, but the lengths of the cycles in the decompositions will be the same. For example, AD above has cycles of length 10, 10, 2, 2, 1, 1. If we decompose S (AD) S^-1 into cycles for any permutation S, we will again get cycles of lengths 10, 10, 2, 2, 1, 1. Therefore, if the plugboard settings are changed, but the initial positions of the rotors remain the same, then the cycle lengths remain unchanged.
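This invariance is easy to check numerically. In the sketch below a permutation is a list p with p[i] the image of i; conj is the conjugate S P S^-1, and the multiset of cycle lengths comes out the same:

```python
import random

def cycle_lengths(p):
    """Sorted list of cycle lengths of a permutation given as a list of images."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        n, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            n += 1
        lengths.append(n)
    return sorted(lengths)

rng = random.Random(0)
P = list(range(26))
rng.shuffle(P)
S = list(range(26))
rng.shuffle(S)

# conj = S P S^-1: it sends S[i] to S[P[i]].
conj = [0] * 26
for i in range(26):
    conj[S[i]] = S[P[i]]

print(cycle_lengths(P))
print(cycle_lengths(conj))  # same multiset of cycle lengths as P
```

Any random choice of S here plays the role of an unknown plugboard setting: it scrambles the cycles themselves but never their lengths.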
You might have noticed that in the decomposition of AD, BE, and CF into cycles, each cycle length appears an even number of times. This is a general phenomenon. For an explanation, see Appendix E of the aforementioned book by Kozaczuk.
Rejewski and his colleagues compiled a catalog of all 105456 initial settings of the rotors along with the set of cycle lengths for the corresponding three permutations AD, BE, and CF. In this way, they could take the ciphertexts for a given day, deduce the cycle lengths, and find the small number of corresponding initial settings for the rotors. Each of these settings could be tried individually. The effect of the plugboard (when the correct setting was used) was then merely a substitution cipher, which was easily broken. This method worked until September 1938, when a modified method of transmitting message keys was adopted. Modifications of the above technique were again used to decrypt the messages. The process was also mechanized, using machines called “bombes” to find daily keys, each in around two hours.
These techniques were extended by the British at Bletchley Park during World War II and included building more sophisticated “bombes.” These machines, designed by Alan Turing, are often considered to have been the first electronic computers.
Caesar wants to arrange a secret meeting with Marc Antony, either at the Tiber (the river) or at the Coliseum (the arena). He sends the ciphertext EVIRE. However, Antony does not know the key, so he tries all possibilities. Where will he meet Caesar? (Hint: This is a trick question.)
Show that each of the ciphertexts and , which were obtained by shift ciphers from one-word plaintexts, has two different decryptions.
The ciphertext was encrypted using the affine function mod 26. Find the plaintext.
The ciphertext was obtained by affine encryption with the function mod 26. Find the plaintext.
Encrypt howareyou using the affine function . What is the decryption function? Check that it works.
You encrypt messages using the affine function mod 26. Decrypt the ciphertext .
A child has learned about affine ciphers. The parent says NONONO. The child responds with hahaha, and quickly claims that this is a decryption of the parent’s message. The parent asks for the encryption function. What answer should the child give?
You try to encrypt messages using the affine cipher mod 26. Find two letters that encrypt to the same ciphertext letter.
The following ciphertext was encrypted by an affine cipher mod 26:
The plaintext starts ha. Decrypt the message.
Alice encrypts a message using the affine function for some . The ciphertext is FAP. The third letter of the plaintext is . Find the plaintext.
Suppose you encrypt using an affine cipher, then encrypt the encryption using another affine cipher (both are working mod 26). Is there any advantage to doing this, rather than using a single affine cipher? Why or why not?
Find all affine ciphers mod 26 for which the decryption function equals the encryption function. (There are 28 of them.)
Suppose we work mod 27 instead of mod 26 for affine ciphers. How many keys are possible? What if we work mod 29?
The ciphertext XVASDW was encrypted using an affine function mod 26. Determine the key and decrypt the message.
Suppose that you want to encrypt a message using an affine cipher. You let a = 0, b = 1, …, z = 25, but you also include four additional symbols, numbered 26, 27, 28, and 29. Therefore, you work mod 30 and use x ↦ αx + β (mod 30) for your encryption function, for some integers α and β.
Show that there are exactly eight possible choices for the integer α (that is, there are only eight choices of α with 0 < α < 30 that allow you to decrypt).
Suppose you try to use a choice of α that is not among these eight. Show that two plaintext letters encrypt to the same ciphertext letter.
You are trying to encrypt using the affine function 13x + 22 mod 26.
Encrypt HATE and LOVE. Why is decryption impossible?
Find two different three-letter words that encrypt to WWW.
Challenge: Find a word (that is legal in various word games) that encrypts to JJJ. (There are four such words.)
You want to carry out an affine encryption using the function αx + β mod 26, but you have gcd(α, 26) = d > 1. Show that if x1 = x2 + (26/d), then αx1 + β ≡ αx2 + β (mod 26). This shows that you will not be able to decrypt uniquely in this case.
You encrypt the message aaaaaaaaaa (there are 10 a’s) using the following cryptosystems:
affine cipher
Vigenère cipher with key length 7
Eve intercepts the ciphertexts. She knows the encryption methods (including key size) and knows what your plaintext is (she can hear you snoring). For each of the two cryptosystems, determine whether or not Eve can use this information to determine the key. Explain your answer.
Suppose there is a language that has only the letters A and B. The frequency of the letter A is .1 and the frequency of B is .9. A message is encrypted using a Vigenère cipher (working mod 2 instead of mod 26). The ciphertext is BABABAAABA. The key length is 1, 2, or 3.
Show that the key length is probably 2.
Using the information on the frequencies of the letters, determine the key and decrypt the message.
Suppose you have a language with only the three letters A, B, C, and they occur with frequencies .7, .2, and .1, respectively. The ciphertext BCCCBCBCBC was encrypted by the Vigenère method (shifts are mod 3, not mod 26). Find the plaintext. (Note: The plaintext is not a meaningful English message.)
Suppose you have a language with only the three letters A, B, C, and they occur with frequencies .7, .2, .1, respectively. The following ciphertext was encrypted by the Vigenère method (shifts are mod 3 instead of mod 26, of course):
Suppose you are told that the key length is 1, 2, or 3. Show that the key length is probably 2, and determine the most probable key.
Victor designs a cryptosystem (called “Vector”) as follows: He writes the letters in the plaintext as numbers mod 26 (with a = 0, b = 1, etc.) and groups them five at a time into five-dimensional vectors. His key is a five-dimensional vector. The encryption is adding the key vector mod 26 to each plaintext vector (so this is a shift cipher with vectors in place of individual letters).
Describe a chosen plaintext attack on this system. Give the explicit plaintext used and how you get the key from the information you obtain.
Victor’s system is not new. It is the same as what well-known system?
If v and w are two vectors in n-dimensional space, then v · w = |v| |w| cos θ, where θ is the angle between the two vectors (measured in the two-dimensional plane spanned by the two vectors), and |v| denotes the length of v. Use this fact to show that, in the notation of Section 2.3, the dot product A0 · Ai is largest when i = 0.
Alice uses an improvement of the Vigenère cipher. She chooses five affine functions

α1 x + β1, α2 x + β2, …, α5 x + β5 (mod 26)

and she uses these to encrypt in the style of Vigenère. Namely, she encrypts the first plaintext letter using the first function, the second letter using the second function, etc., repeating the cycle after five letters.
What condition do α1, α2, …, α5 need to satisfy for Bob (who knows the key) to be able to decrypt the message?
Describe how to do a chosen plaintext attack to find the key. Give the plaintext explicitly and explain how it yields the key. (Note: the solution has nothing to do with frequencies of letters.)
Alice is sending a message to Bob using one of the following cryptosystems. In fact, Alice is bored and her plaintext consists of the same letter repeated a few hundred times. Eve knows what system is being used, but not the key, and intercepts the ciphertext. For systems (a), (b), and (c), state how Eve will recognize that the plaintext is one repeated letter and decide whether or not Eve can deduce the letter and the key.
Shift cipher
Affine cipher
Vigenère cipher
The operator of a Vigenère encryption machine is bored and encrypts a plaintext consisting of the same letter of the alphabet repeated several hundred times. The key is a seven-letter English word. Eve knows that the key is a word but does not yet know its length.
What property of the ciphertext will make Eve suspect that the plaintext is one repeated letter and will allow her to guess that the key length is seven?
Once Eve guesses that the plaintext is one repeated letter, how can she determine the key? (Hint: You need the fact that no English word of length seven is a shift of another English word.)
Suppose Eve doesn’t notice the property needed in part (a), and therefore uses the method of displacing then counting matches for finding the length of the key. What will the number of matches be for the various displacements? In other words, why will the length of the key become very obvious by this method?
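The displace-and-count-matches method mentioned in this exercise can be sketched in a few lines of Python. The function name and the toy ciphertext below are ours, for illustration; for a plaintext of one repeated letter encrypted with a key of length 7, every displacement that is a multiple of 7 will match at (almost) every position.

```python
# Sketch of the "displace the ciphertext and count matches" method (Section 2.3).
def count_matches(ciphertext, displacement):
    """Count positions where the text agrees with itself shifted by `displacement`."""
    return sum(1 for i in range(len(ciphertext) - displacement)
               if ciphertext[i] == ciphertext[i + displacement])

text = "abcabcabcabc"  # toy ciphertext with period 3
for d in range(1, 7):
    print(d, count_matches(text, d))  # peaks at displacements 3 and 6
```

Displacements that are multiples of the period line up identical shifts, so the match count jumps there; that is exactly why the key length becomes obvious.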
Use the Playfair cipher with the keyword Cryptography to encrypt
The ciphertext
was encrypted using the Playfair cipher with keyword Archimedes. Find the plaintext.
Encrypt the plaintext secret using the ADFGX cipher with the matrix in Section 2.6 and the keyword spy.
The ciphertext AAAAFXGGFAFFGGFGXAFGADGGAXXXFX was encrypted using the ADFGX cipher with the matrix in Section 2.6 and the keyword broken. Find the plaintext.
Suppose Alice and Bob are using a cryptosystem with a 128-bit key, so there are 2^128 possible keys. Eve is trying a brute-force attack on the system.
Suppose it takes 1 day for Eve to try 2^64 possible keys. At this rate, how long will it take for Eve to try all 2^128 keys? (Hint: The answer is not 2 days.)
Suppose Alice waits 10 years and then buys a computer that is 100 times faster than the one she now owns (so it takes only 1/100 of a day, which is 864 seconds, to try 2^64 keys). Will she finish trying all 2^128 keys before or after she would have finished in part (a)? (Note: This is a case where Aesop’s Fable about the Tortoise and the Hare has a different ending.)
In the mid-1980s, a recruiting advertisement for NSA had 1 followed by one hundred 0s at the top. The text began “You’re looking at a ‘googol.’ Ten raised to the 100th power. One followed by 100 zeroes. Counting 24 hours a day, you would need 120 years to reach a googol. Two lifetimes. It’s a number that’s impossible to grasp. A number beyond our imagination.”
How many numbers would you have to count each second in order to reach a googol in 120 years? (This problem is not related to the cryptosystems in this chapter. It is included to show how big 100-digit numbers are from a computational viewpoint. Regarding the ad, one guess is that the advertising firm assumed that the time it took to factor a 100-digit number back then was the same as the time it took to count to a googol.)
The following ciphertext was encrypted by a shift cipher:
ycvejqwvhqtdtwvwu
Decrypt. (The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name ycve.)
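A brute-force search over all 26 shifts is the standard way to attack such a ciphertext by computer; the following sketch (the function name is ours) prints every candidate so the English plaintext can be spotted by eye:

```python
# Try all 26 possible shift-cipher keys on a lowercase a-z ciphertext.
def shift_decrypt(ciphertext, key):
    """Undo a shift cipher over a-z (a = 0, ..., z = 25)."""
    return "".join(chr((ord(c) - ord('a') - key) % 26 + ord('a'))
                   for c in ciphertext)

for key in range(26):
    print(key, shift_decrypt("ycvejqwvhqtdtwvwu", key))
```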
The following ciphertext was the output of a shift cipher:
lcllewljazlnnzmvyiylhrmhza
By performing a frequency count, guess the key used in the cipher. Use the computer to test your hypothesis. What is the decrypted plaintext? (The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name lcll.)
The following was encrypted by an affine cipher: jidfbidzzteztxjsichfoihuszzsfsaichbipahsibdhu hzsichjujgfabbczggjsvzubehhgjsv. Decrypt it. (This quote (NYTimes, 12/7/2014) is by Mark Wahlberg from when he was observing college classes in order to play a professor in “The Gambler." The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name jidf.) (Hint: The command “frequency” could be useful. The plaintext has 9 e’s, 3 d’s, and 3 w’s.)
The following ciphertext was encrypted by an affine cipher:
edsgickxhuklzveqzvkxwkzukcvuh
The first two letters of the plaintext are if. Decrypt. (The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name edsg.)
The following ciphertext was encrypted by an affine cipher using the function 3x + b for some b:
tcabtiqmfheqqmrmvmtmaq
Decrypt. (The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name tcab.)
Experiment with the affine cipher y ≡ mx + n (mod 26) for values of m > 26. In particular, determine whether or not these encryptions are the same as ones obtained with values of m between 0 and 25.
In this problem you are to get your hands dirty doing some programming. Write some code that creates a new alphabet {A, C, G, T}. For example, this alphabet could correspond to the four nucleotides adenine, cytosine, guanine, and thymine, which are the basic building blocks of DNA and RNA codes. Associate the letters A, C, G, T with the numbers 0, 1, 2, 3, respectively.
Using a shift cipher on this four-letter alphabet, encrypt the following sequence of nucleotides, which is taken from the beginning of the thirteenth human chromosome:
GAATTCGCGGCCGCAATTAACCCTCACTAAAGGGATCT
CTAGAACT.
Write a program that performs affine ciphers on the nucleotide alphabet. What restrictions are there on the affine cipher?
The following was encrypted by the Vigenère method using a key of length at most 6. Decrypt it and decide what is unusual about the plaintext. How did this affect the results?
hdsfgvmkoowafweetcmfthskucaqbilgjofmaqlgspvatvxqbiryscpcfrmvswrvnqlszdmgaoqsakmlupsqforvtwvdfcjzvgsoaoqsacjkbrsevbelvbksarlscdcaarmnvrysywxqgvellcyluwwveoafgclazowafojdlhssfiksepsoywxafowlbfcsocylngqsyzxgjbmlvgrggokgfgmhlmejabsjvgmlnrvqzcrggcrghgeupcyfgtydycjkhqluhgxgzovqswpdvbwsffsenbxapasgazmyuhgsfhmftayjxmwznrsofrsoaopgauaaarmftqsmahvqecev
(The ciphertext is stored under the name hdsf in the downloadable computer files (bit.ly/2JbcS6p). The plaintext is from Gadsby by Ernest Vincent Wright.)
The following was encrypted by the Vigenère method. Find the plaintext.
ocwyikoooniwugpmxwktzdwgtssayjzwyemdlbnqaaavsuwdvbrflauplooubfgqhgcscmgzlatoedcsdeidpbhtmuovpiekifpimfnoamvlpqfxejsmxmpgkccaykwfzpyuavtelwhrhmwkbbvgtguvtefjlodfefkvpxsgrsorvgtajbsauhzrzalkwuowhgedefnswmrciwcpaaavogpdnfpktdbalsisurlnpsjyeatcuceesohhdarkhwotikbroqrdfmzghgucebvgwcdqxgpbgqwlpbdaylooqdmuhbdqgmyweuik
(The ciphertext is stored under the name ocwy in the downloadable computer files (bit.ly/2JbcS6p). The plaintext is from The Adventure of the Dancing Men by Sir Arthur Conan Doyle.)
The following was encrypted by the Vigenère method. Decrypt it. (The ciphertext is stored under the name xkju in the downloadable computer files (bit.ly/2JbcS6p).)
xkjurowmllpxwznpimbvbqjcnowxpcchhvvfvsllfvxhazityxohulxqojaxelxzxmyjaqfstsrulhhucdskbxknjqidallpqslluhiaqfpbpcidsvcihwhwewthbtxrljnrsncihuvffuxvoukjljswmaqfvjwjsdyljogjxdboxajultucpzmpliwmlubzxvoodybafdskxgqfadshxnxehsaruojaqfpfkndhsaafvulluwtaqfrupwjrszxgpfutjqiynrxnyntwmhcukjfbirzsmehhsjshyonddzzntzmplilrwnmwmlvuryonthuhabwnvw
In modern cryptographic systems, the messages are represented by numerical values prior to being encrypted and transmitted. The encryption processes are mathematical operations that turn the input numerical values into output numerical values. Building, analyzing, and attacking these cryptosystems requires mathematical tools. The most important of these is number theory, especially the theory of congruences. This chapter presents the basic tools needed for the rest of the book. More advanced topics such as factoring, discrete logarithms, and elliptic curves will be treated in later chapters (Chapters 9, 10, and 21, respectively).
Number theory is concerned with the properties of the integers. One of the most important is divisibility.
Let a and b be integers with a ≠ 0. We say that a divides b if there is an integer k such that b = ak. This is denoted by a | b. Another way to express this is that b is a multiple of a.
For example, 3 | 15, since 15 = 3 · 5; but 15 ∤ 3 (does not divide), since there is no integer k with 3 = 15k.
The following properties of divisibility are useful.
Let a, b, c represent integers.
1. For every a ≠ 0, a | 0 and a | a. Also, 1 | b for every b.
2. If a | b and b | c, then a | c.
3. If a | b and a | c, then a | (sb + tc) for all integers s and t.
Proof. Since 0 = a · 0, we may take k = 0 in the definition to obtain a | 0. Since a = a · 1, we take k = 1 to prove a | a. Since b = 1 · b, we have 1 | b. This proves (1). In (2), there exist integers k and m such that b = ak and c = bm. Therefore, c = (km)a, so a | c. For (3), write b = a k_1 and c = a k_2. Then sb + tc = a(s k_1 + t k_2), so a | (sb + tc).
For example, take a = 2 in part (2). Then 2 | b simply means that b is even. The statement in the proposition says that c, which is a multiple of the even number b, must also be even (that is, a multiple of a = 2).
A number p > 1 whose positive divisors are only 1 and itself is called a prime number. The first few primes are 2, 3, 5, 7, 11, 13, 17, … . An integer n > 1 that is not prime is called composite, which means that n must be expressible as a product n = ab of integers with 1 < a, b < n. A fact, known already to Euclid, is that there are infinitely many prime numbers. A more precise statement is the following, proved in 1896.
Let π(x) be the number of primes less than x. Then
π(x) ~ x / ln x,
in the sense that the ratio π(x) / (x / ln x) → 1 as x → ∞.
We won’t prove this here; its proof would lead us too far away from our cryptographic goals. In various applications, we’ll need large primes, say of around 300 digits. We can estimate the number of 300-digit primes as follows:
π(10^300) − π(10^299) ≈ 10^300 / ln(10^300) − 10^299 / ln(10^299) ≈ 1.3 × 10^297.
So there are certainly enough such primes. Later, we’ll discuss how to find them.
Prime numbers are the building blocks of the integers. Every positive integer has a unique representation as a product of prime numbers raised to different powers. For example, 504 and 1125 have the following factorizations:
504 = 2^3 · 3^2 · 7,  1125 = 3^2 · 5^3.
Moreover, these factorizations are unique, except for reordering the factors. For example, if we factor 504 into primes, then we will always obtain three factors of 2, two factors of 3, and one factor of 7. Anyone who obtains the prime 41 as a factor has made a mistake.
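Factorizations of small numbers like 504 and 1125 can be checked with a short trial-division routine (a sketch only, far too slow for cryptographic sizes; the function name is ours):

```python
# Trial-division factorization: returns {prime: exponent}.
def factor(n):
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:          # divide out each prime completely
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # whatever is left is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factor(504))   # {2: 3, 3: 2, 7: 1}
print(factor(1125))  # {3: 2, 5: 3}
```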
Every positive integer is a product of primes. This factorization into primes is unique, up to reordering the factors.
Proof. There is a small technicality that must be dealt with before we begin. When dealing with products, it is convenient to make the convention that an empty product equals 1. This is similar to the convention that x^0 = 1. Therefore, the positive integer 1 is a product of primes, namely the empty product. Also, each prime is regarded as a one-factor product of primes.
Suppose there exist positive integers that are not products of primes. Let n be the smallest such integer. Then n cannot be 1 (the empty product), or a prime (a one-factor product), so n must be composite. Therefore, n = ab with 1 < a, b < n. Since n is the smallest positive integer that is not a product of primes, both a and b are products of primes. But a product of primes times a product of primes is a product of primes, so n = ab is a product of primes. This contradiction shows that the set of integers that are not products of primes must be the empty set. Therefore, every positive integer is a product of primes.
The uniqueness of the factorization is more difficult to prove. We need the following very important property of primes.
If p is a prime and p divides a product of integers ab, then either p | a or p | b. More generally, if a prime p divides a product ab⋯z, then p must divide one of the factors a, b, …, z.
For example, when p = 2, this says that if a product of two integers is even, then one of the two integers must be even. The proof of the lemma will be given at the end of the next section, after we discuss the Extended Euclidean algorithm.
Continuing with the proof of the theorem, suppose that an integer n can be written as a product of primes in two different ways:
n = p_1^{a_1} p_2^{a_2} ⋯ p_s^{a_s} = q_1^{b_1} q_2^{b_2} ⋯ q_t^{b_t},
where p_1, …, p_s and q_1, …, q_t are primes, and the exponents a_i and b_j are nonzero. If a prime occurs in both factorizations, divide both sides by it to obtain a shorter relation. Continuing in this way, we may assume that none of the primes p_1, …, p_s occur among the q_j’s. Take a prime that occurs on the left side, say p_1. Since p_1 divides n, which equals q_1^{b_1} q_2^{b_2} ⋯ q_t^{b_t}, the lemma says that p_1 must divide one of the factors q_j. Since q_j is prime, p_1 = q_j. This contradicts the assumption that p_1 does not occur among the q_j’s. Therefore, an integer cannot have two distinct factorizations, as claimed.
The greatest common divisor of a and b is the largest positive integer dividing both a and b, and is denoted by either gcd(a, b) or by (a, b). In this book, we use the first notation. To avoid technicalities, we always assume implicitly that at least one of a and b is nonzero.
We say that a and b are relatively prime if gcd(a, b) = 1. There are two standard ways for finding the gcd:
If you can factor a and b into primes, do so. For each prime number, look at the powers to which it appears in the factorizations of a and b. Take the smaller of the two exponents. Put these prime powers together to get the gcd. This is easiest to understand by an example: using the factorizations of 504 and 1125 given previously,
gcd(504, 1125) = gcd(2^3 · 3^2 · 7, 3^2 · 5^3) = 3^2 = 9.
Note that if a prime does not appear in a factorization, then it cannot appear in the gcd.
Suppose and are large numbers, so it might not be easy to factor them. The gcd can be calculated by a procedure known as the Euclidean algorithm. It goes back to what everyone learned in grade school: division with remainder. Before giving a formal description of the algorithm, let’s see some examples.
Compute gcd(482, 1180).
SOLUTION
Divide 482 into 1180. The quotient is 2 and the remainder is 216. Now divide the remainder 216 into 482. The quotient is 2 and the remainder is 50. Divide the remainder 50 into the previous remainder 216. The quotient is 4 and the remainder is 16. Continue this process of dividing the most recent remainder into the previous one. The last nonzero remainder is the gcd, which is 2 in this case:
1180 = 2 · 482 + 216
482 = 2 · 216 + 50
216 = 4 · 50 + 16
50 = 3 · 16 + 2
16 = 8 · 2 + 0.
Notice how the numbers are shifted: in each step, the divisor becomes the new dividend and the remainder becomes the new divisor.
Here is another example:
12345 = 1 · 11111 + 1234
11111 = 9 · 1234 + 5
1234 = 246 · 5 + 4
5 = 1 · 4 + 1
4 = 4 · 1 + 0.
Therefore, gcd(12345, 11111) = 1.
Using these examples as guidelines, we can now give a more formal description of the Euclidean algorithm. Suppose that a is greater than b. If not, switch a and b. The first step is to divide a by b, hence represent a in the form
a = q_1 b + r_1.
If r_1 = 0, then b divides a and the greatest common divisor is b. If r_1 ≠ 0, then continue by representing b in the form
b = q_2 r_1 + r_2.
Continue in this way until the remainder is zero, giving the following sequence of steps:
a = q_1 b + r_1
b = q_2 r_1 + r_2
r_1 = q_3 r_2 + r_3
⋮
r_{k−2} = q_k r_{k−1} + r_k
r_{k−1} = q_{k+1} r_k.
The conclusion is that
gcd(a, b) = r_k, the last nonzero remainder.
There are two important aspects to this algorithm:
It does not require factorization of the numbers.
It is fast.
For a proof that it actually computes the gcd, see Exercise 59.
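The algorithm itself is only a few lines of Python (the function name is ours); each pass of the loop performs one division with remainder, exactly as in the examples above:

```python
# Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b).
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a              # the last nonzero remainder

print(gcd(1180, 482))     # 2
print(gcd(12345, 11111))  # 1
```

Note that the numbers are never factored; only divisions with remainder are performed.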
The Euclidean Algorithm computes greatest common divisors quickly, but also, with only slightly more work, yields a very useful fact: gcd(a, b) can be expressed as a linear combination of a and b. That is, there exist integers x and y such that gcd(a, b) = ax + by. For example,
2 = gcd(482, 1180) = 1180 · (-29) + 482 · 71.
The Extended Euclidean Algorithm will tell us how to find x and y. Rather than give a set of equations, we’ll show how it works with the two examples we calculated in Subsection 3.1.3.
When we computed gcd(12345, 11111), we did the following calculation:
12345 = 1 · 11111 + 1234
11111 = 9 · 1234 + 5
1234 = 246 · 5 + 4
5 = 1 · 4 + 1
4 = 4 · 1 + 0.
For the Extended Euclidean Algorithm, we’ll form a table with three columns and explain how they arise as we compute them.
We begin by forming two rows and three columns. The first entries in the rows are the original numbers we started with, namely 12345 and 11111. We will do some calculations so that in each row we always have
(first entry) = 12345 · x + 11111 · y,
where x and y are the integers in the second and third columns. The first two lines are trivial: 12345 = 12345 · 1 + 11111 · 0 and 11111 = 12345 · 0 + 11111 · 1.
| 12345 | 1 | 0 |
| 11111 | 0 | 1 |
The first line in our calculation tells us that 12345 = 1 · 11111 + 1234. We rewrite this as 1234 = 12345 - 1 · 11111. Using this, we compute (1st row) - 1 · (2nd row),
yielding the following:
| 12345 | 1 | 0 | |
| 11111 | 0 | 1 | |
| 1234 | 1 | -1 | (1st row) - 1 · (2nd row). |
In effect, we have done the following subtraction, column by column:
12345 - 11111 = 1234,  1 - 0 = 1,  0 - 1 = -1.
Therefore, the last line tells us that 1234 = 12345 · 1 + 11111 · (-1).
We now move to the second row of our gcd calculation. This says that 11111 = 9 · 1234 + 5, which we rewrite as 5 = 11111 - 9 · 1234. This tells us to compute (2nd row) - 9 · (3rd row). We write this as
| 12345 | 1 | 0 | |
| 11111 | 0 | 1 | |
| 1234 | 1 | -1 | |
| 5 | -9 | 10 | (2nd row) - 9 · (3rd row). |
The last line tells us that 5 = 12345 · (-9) + 11111 · 10.
The third row of our gcd calculation tells us that 1234 = 246 · 5 + 4. Rewritten as 4 = 1234 - 246 · 5, this tells us to compute (3rd row) - 246 · (4th row). This becomes
| 12345 | 1 | 0 | |
| 11111 | 0 | 1 | |
| 1234 | 1 | -1 | |
| 5 | -9 | 10 | |
| 4 | 2215 | -2461 | (3rd row) - 246 · (4th row). |
Finally, the fourth row of the gcd calculation says that 5 = 1 · 4 + 1, that is, 1 = 5 - 1 · 4, so we compute (4th row) - 1 · (5th row) and obtain
| 12345 | 1 | 0 | |
| 11111 | 0 | 1 | |
| 1234 | 1 | -1 | |
| 5 | -9 | 10 | |
| 4 | 2215 | -2461 | |
| 1 | -2224 | 2471 | (4th row) - 1 · (5th row). |
This tells us that 1 = gcd(12345, 11111) = 12345 · (-2224) + 11111 · 2471.
Notice that as we proceeded, we were doing the Euclidean Algorithm in the first column. The first entry of each row is a remainder from the gcd calculation, and the entries in the second and third columns allow us to express the number in the first column as a linear combination of 12345 and 11111. The quotients in the Euclidean Algorithm tell us what to multiply a row by before subtracting it from the previous row.
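The table-based computation above is equivalent to the following short routine (a sketch; the name extended_gcd is ours), which carries the two coefficient columns along with the remainders:

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    old_r, r = a, b          # remainder column
    old_x, x = 1, 0          # coefficient of a
    old_y, y = 0, 1          # coefficient of b
    while r != 0:
        q = old_r // r                       # quotient from the gcd calculation
        old_r, r = r, old_r - q * r          # (previous row) - q * (current row)
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_gcd(12345, 11111))  # (1, -2224, 2471)
```

For 12345 and 11111, it reproduces the coefficients -2224 and 2471 found in the table.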
Let’s do another example using 482 and 1180 and our previous calculation that gcd(482, 1180) = 2. Here each row satisfies (first entry) = 1180 · x + 482 · y:
| 1180 | 1 | 0 | |
| 482 | 0 | 1 | |
| 216 | 1 | -2 | (1st row) - 2 · (2nd row) |
| 50 | -2 | 5 | (2nd row) - 2 · (3rd row) |
| 16 | 9 | -22 | (3rd row) - 4 · (4th row) |
| 2 | -29 | 71 | (4th row) - 3 · (5th row). |
The end result is 2 = gcd(482, 1180) = 1180 · (-29) + 482 · 71.
To summarize, we state the following.
Let a and b be integers with at least one of a, b nonzero. There exist integers x and y, which can be found by the Extended Euclidean Algorithm, such that
gcd(a, b) = ax + by.
As a corollary, we deduce the lemma we needed during the proof of the uniqueness of factorization into primes.
If p is a prime and p divides a product of integers ab, then either p | a or p | b. More generally, if a prime p divides a product ab⋯z, then p must divide one of the factors a, b, …, z.
Proof. First, let’s work with the case p | ab. If p divides a, we are done. Now assume p does not divide a. We claim gcd(a, p) = 1. Since p is prime, the only positive divisors of p are 1 and p, so gcd(a, p) is 1 or p. Since p does not divide a, the gcd cannot be p. Therefore, gcd(a, p) = 1, so there exist integers x and y with ax + py = 1. Multiply by b to obtain abx + pby = b. Since p | ab and p | pb, we have p | (abx + pby), so p | b, as claimed.
If p | abc⋯z, then p | a or p | bc⋯z. If p | a, we’re done. Otherwise, p | bc⋯z. We now have a shorter product. Either p | b, in which case we’re done, or p divides the product of the remaining factors. Continuing in this way, we eventually find that p divides one of the factors of the product.
The property of primes stated in the corollary holds only for primes. For example, if we know a product bc is divisible by 6, we cannot conclude that b or c is a multiple of 6. The problem is that 6 = 2 · 3, and the 2 could be in b while the 3 could be in c, as seen in the example 6 | 2 · 3. More generally, if n = ab is any composite, with 1 < a, b < n, then n | ab, but n does not divide a and n does not divide b. Therefore, the primes, and 1, are the only integers with the property of the corollary.
One of the most basic and useful notions in number theory is modular arithmetic, or congruences.
Let a, b, n be integers with n ≠ 0. We say that
a ≡ b (mod n)
(read: a is congruent to b mod n) if a − b is a multiple (positive or negative or zero) of n.
Another formulation is that a ≡ b (mod n) if a and b differ by a multiple of n. This can be rewritten as a = b + nk for some integer k (positive or negative).
Note: Many computer programs regard 17 (mod 10) as equal to the number 7, namely, the remainder obtained when 17 is divided by 10 (often written as 17%10 = 7). The notion of congruence we use is closely related. We have that two numbers a and b are congruent mod n if they yield the same remainders when divided by n. For example, 17 ≡ 37 (mod 10), because 17%10 and 37%10 are equal.
Congruence behaves very much like equality. In fact, the notation for congruence was intentionally chosen to resemble the notation for equality.
Let a, b, c, n be integers with n ≠ 0.
1. a ≡ 0 (mod n) if and only if n | a.
2. a ≡ a (mod n).
3. a ≡ b (mod n) if and only if b ≡ a (mod n).
4. If a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n).
Proof. In (1), a ≡ 0 (mod n) means that a − 0 = a is a multiple of n, which is the same as n | a. In (2), we have a − a = 0 · n, so a ≡ a (mod n). In (3), if a ≡ b (mod n), write a − b = nk. Then b − a = n(−k), so b ≡ a (mod n). Reversing the roles of a and b gives the reverse implication. For (4), write a − b = nk and b − c = nm. Then a − c = n(k + m), so a ≡ c (mod n).
Usually, we have n > 0 and we work with the integers mod n, denoted Z_n. These may be regarded as the set {0, 1, 2, …, n − 1}, with addition, subtraction, and multiplication mod n. If a is any integer, we may divide a by n and obtain a remainder in this set:
a = nq + r with 0 ≤ r < n.
(This is just division with remainder; q is the quotient and r is the remainder.) Then a ≡ r (mod n), so every number a is congruent mod n to some integer r with 0 ≤ r ≤ n − 1.
Let a, b, c, d, n be integers with n ≠ 0, and suppose a ≡ b (mod n) and c ≡ d (mod n). Then
a + c ≡ b + d,  a − c ≡ b − d,  ac ≡ bd (mod n).
Proof. Write a = b + nk and c = d + nm, for integers k and m. Then a + c = (b + d) + n(k + m), so a + c ≡ b + d (mod n). The proof that a − c ≡ b − d is similar. For multiplication, we have ac = bd + n(bm + dk + nkm), so ac ≡ bd (mod n).
The proposition says you can perform the usual arithmetic operations of addition, subtraction, and multiplication with congruences. You must be careful, however, when trying to perform division, as we’ll see.
If we take two numbers and want to multiply them modulo n, we start by multiplying them as integers. If the product is less than n, we stop. If the product is larger than n, we divide by n and take the remainder. Addition and subtraction are done similarly. For example, the integers modulo 6 have the following addition table:
| + | 0 | 1 | 2 | 3 | 4 | 5 |
| 0 | 0 | 1 | 2 | 3 | 4 | 5 |
| 1 | 1 | 2 | 3 | 4 | 5 | 0 |
| 2 | 2 | 3 | 4 | 5 | 0 | 1 |
| 3 | 3 | 4 | 5 | 0 | 1 | 2 |
| 4 | 4 | 5 | 0 | 1 | 2 | 3 |
| 5 | 5 | 0 | 1 | 2 | 3 | 4 |
A table for multiplication mod 6 is
| × | 0 | 1 | 2 | 3 | 4 | 5 |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 | 4 | 5 |
| 2 | 0 | 2 | 4 | 0 | 2 | 4 |
| 3 | 0 | 3 | 0 | 3 | 0 | 3 |
| 4 | 0 | 4 | 2 | 0 | 4 | 2 |
| 5 | 0 | 5 | 4 | 3 | 2 | 1 |
Here is an example of how we can do algebra mod n. Consider the following problem: Solve x + 7 ≡ 3 (mod 17).
SOLUTION
x ≡ 3 − 7 ≡ −4 ≡ 13 (mod 17). There is nothing wrong with negative answers, but usually we write the final answer as an integer from 0 to n − 1 when we are working mod n.
Division is much trickier mod n than it is with rational numbers. The general rule is that you can divide by a when gcd(a, n) = 1.
Let a, b, c, n be integers with n ≠ 0 and with gcd(a, n) = 1. If ab ≡ ac (mod n), then b ≡ c (mod n). In other words, if a and n are relatively prime, we can divide both sides of the congruence by a.
Proof. Since gcd(a, n) = 1, there exist integers x, y such that ax + ny = 1. Multiply by b − c to obtain
(ab − ac)x + n(b − c)y = b − c.
Since ab − ac is a multiple of n, by assumption, and n(b − c) is also a multiple of n, we find that b − c is a multiple of n. This means that b ≡ c (mod n).
Solve: 2x + 7 ≡ 3 (mod 17).
SOLUTION
2x ≡ 3 − 7 ≡ −4 (mod 17), so x ≡ −2 ≡ 15 (mod 17). The division by 2 is allowed since gcd(2, 17) = 1.
Solve: 5x ≡ 7 (mod 11).
SOLUTION
Now what do we do? We want to divide by 5, but what does 7/5 mean mod 11? Note that 7 ≡ 18 ≡ 29 ≡ 40 (mod 11). So 5x ≡ 7 (mod 11) is the same as 5x ≡ 40 (mod 11). Now we can divide by 5 and obtain x ≡ 8 (mod 11) as the answer. Note that 5 · 8 = 40 ≡ 7 (mod 11), so 8 acts like 7/5.
The last example can be done another way. Since 5 · 9 = 45 ≡ 1 (mod 11), we see that 9 is the multiplicative inverse of 5 mod 11. Therefore, dividing by 5 can be accomplished by multiplying by 9. If we want to solve 5x ≡ 7 (mod 11), we multiply both sides by 9 and obtain x ≡ 9 · 7 ≡ 63 ≡ 8 (mod 11).
Suppose gcd(a, n) = 1. Let s and t be integers such that as + nt = 1 (they can be found using the extended Euclidean algorithm). Then as ≡ 1 (mod n), so s is the multiplicative inverse for a (mod n).
Proof. Since as − 1 = −nt, we see that as − 1 is a multiple of n.
Notation: We let a^(-1) denote this s, so a^(-1) satisfies a · a^(-1) ≡ 1 (mod n).
The extended Euclidean algorithm is fairly efficient for computing the multiplicative inverse of a (mod n) by the method stated in the proposition.
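In Python 3.8 and later, the built-in pow computes modular inverses directly, so the computation can be checked in one line (the variable name is ours):

```python
# pow(a, -1, n) returns the multiplicative inverse of a mod n
# (it raises ValueError when gcd(a, n) != 1).
inv = pow(11111, -1, 12345)
print(inv)                   # 2471
print(11111 * inv % 12345)   # 1
```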
Solve 11111x ≡ 4 (mod 12345).
SOLUTION
In Section 3.2, from the calculation of gcd(12345, 11111) = 1, we obtained
12345 · (-2224) + 11111 · 2471 = 1.
This says that
11111 · 2471 ≡ 1 (mod 12345), so 11111^(-1) ≡ 2471 (mod 12345).
Multiplying both sides of the original congruence by 2471 yields
x ≡ 2471 · 4 ≡ 9884 (mod 12345).
In practice, this means that if we are working mod 12345 and we encounter the fraction 4/11111, we can replace it with 9884. This might seem a little strange, but think about what 4/11111 means. It’s simply a symbol to represent a quantity that, when multiplied by 11111, yields 4. When we are working mod 12345, the number 9884 also has this property since 11111 · 9884 ≡ 4 (mod 12345).
Let’s summarize some of the discussion:
Finding a^(-1) (mod n) when gcd(a, n) = 1:
1. Use the extended Euclidean algorithm to find integers s and t such that as + nt = 1.
2. Then a^(-1) ≡ s (mod n).
Solving ax ≡ c (mod n) when gcd(a, n) = 1:
(Equivalently, you could be working mod n and encounter a fraction c/a with gcd(a, n) = 1.)
1. Use the extended Euclidean algorithm to find integers s and t such that as + nt = 1.
2. The solution is x ≡ cs (mod n) (equivalently, replace the fraction c/a with cs (mod n)).
Occasionally we will need to solve congruences of the form ax ≡ b (mod n) when gcd(a, n) = d > 1. The procedure is as follows:
1. If d does not divide b, there is no solution.
2. Assume d | b. Consider the new congruence
(a/d) x ≡ b/d (mod n/d).
Note that a/d, b/d, n/d are integers and gcd(a/d, n/d) = 1. Solve this congruence by the above procedure to obtain a solution x_0.
3. The solutions of the original congruence ax ≡ b (mod n) are
x_0, x_0 + (n/d), x_0 + 2(n/d), …, x_0 + (d − 1)(n/d) (mod n).
Solve 12x ≡ 21 (mod 39).
SOLUTION
gcd(12, 39) = 3, which divides 21. Divide by 3 to obtain the new congruence 4x ≡ 7 (mod 13). A solution x_0 ≡ 5 (mod 13) can be obtained by trying a few numbers, or by using the extended Euclidean algorithm. The solutions to the original congruence are x ≡ 5, 18, 31 (mod 39).
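The three-step procedure above translates directly into a short routine (the function name is ours; pow(a, -1, n) requires Python 3.8+):

```python
from math import gcd

# Solve a*x ≡ b (mod n), allowing gcd(a, n) = d > 1.
def solve_congruence(a, b, n):
    d = gcd(a, n)
    if b % d != 0:
        return []                        # step 1: no solution
    a0, b0, n0 = a // d, b // d, n // d  # step 2: reduced congruence
    x0 = b0 * pow(a0, -1, n0) % n0       # gcd(a0, n0) = 1, so the inverse exists
    return [x0 + k * n0 for k in range(d)]  # step 3: d solutions mod n

print(solve_congruence(12, 21, 39))  # [5, 18, 31]
```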
The preceding congruences contained x to the first power. However, nonlinear congruences are also useful. In several places in this book, we will meet equations of the form
x^2 ≡ b (mod n).
First, consider x^2 ≡ 1 (mod 7). The solutions are x ≡ ±1 (mod 7), as we can see by trying the values 0, 1, 2, …, 6 for x. In general, when p is an odd prime, x^2 ≡ 1 (mod p) has exactly the two solutions x ≡ ±1 (mod p) (see Exercise 15).
Now consider x^2 ≡ 1 (mod 15). If we try the numbers 0, 1, 2, …, 14 for x, we find that x ≡ 1, 4, 11, 14 are solutions. For example, 4^2 = 16 ≡ 1 (mod 15). Therefore, a quadratic congruence for a composite modulus can have more than two solutions, in contrast to the fact that a quadratic equation with real numbers, for example x^2 = 1, can have at most two solutions. In Section 3.4, we’ll discuss this phenomenon. In Chapters 9 (factoring), 18 (flipping coins), and 19 (identification schemes), we’ll meet applications of this fact.
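A brute-force loop over all residues confirms these counts (a sketch; the function name is ours):

```python
# All solutions of x^2 ≡ 1 (mod n), found by exhaustive search.
def sqrt_of_one(n):
    return [x for x in range(n) if x * x % n == 1]

print(sqrt_of_one(7))   # [1, 6], i.e., ±1 mod 7
print(sqrt_of_one(15))  # [1, 4, 11, 14]
```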
In many situations, it will be convenient to work with fractions mod n. For example, when working mod 12345, the symbol 1/2 is easier to write than 6173 (note that 2 · 6173 ≡ 1 (mod 12345)). The general rule is that a fraction b/a can be used mod n when gcd(a, n) = 1. Of course, it should be remembered that b/a (mod n) really means b · a^(-1) (mod n), where a^(-1) denotes the integer mod n that satisfies a · a^(-1) ≡ 1 (mod n). But nothing will go wrong if it is treated as a fraction.
Another way to look at this is the following. The symbol “1/2” is simply a symbol with exactly one property: If you multiply 1/2 by 2, you get 1. In all calculations involving the symbol 1/2, this is the only property that is used. When we are working mod 12345, the number 6173 also has this property, since 2 · 6173 = 12346 ≡ 1 (mod 12345). Therefore, 1/2 and 6173 may be used interchangeably.
Why can’t we use fractions with arbitrary denominators? Of course, we cannot use 1/0, since that would mean dividing by 0. But even if we try to work with 1/2 mod 6, we run into trouble. For example, 2 · 1 ≡ 2 · 4 (mod 6), but we cannot multiply both sides by 1/2, since 1 ≢ 4 (mod 6). The problem is that gcd(2, 6) = 2 ≠ 1, so 2 has no multiplicative inverse mod 6. Since 2 is a factor of 6, we can think of dividing by 2 as “partially dividing by 0.” In any case, it is not allowed.
In many situations, it is useful to break a congruence mod n into a system of congruences mod factors of n. Consider the following example. Suppose we know that a number x satisfies x ≡ a (mod 42). This means that we can write x = a + 42k for some integer k. Rewriting 42 as 6 · 7, we obtain x = a + 6(7k), which implies that x ≡ a (mod 6). Similarly, since x = a + 7(6k), we have x ≡ a (mod 7). Therefore, a single congruence mod 42 yields a congruence mod 6 and a congruence mod 7.
The Chinese remainder theorem shows that this process can be reversed; namely, a system of congruences can be replaced by a single congruence under certain conditions.
Suppose gcd(m, n) = 1. Given integers a and b, there exists exactly one solution x (mod mn) to the simultaneous congruences
x ≡ a (mod m),  x ≡ b (mod n).
Proof. There exist integers s, t such that ms + nt = 1. Then ms ≡ 1 (mod n) and nt ≡ 1 (mod m). Let x = bms + ant. Then x ≡ ant ≡ a (mod m), and x ≡ bms ≡ b (mod n), so a solution x exists. Suppose x_1 is another solution. Then x ≡ x_1 (mod m) and x ≡ x_1 (mod n), so x − x_1 is a multiple of both m and n.
Let m, n be integers with gcd(m, n) = 1. If an integer c is a multiple of both m and n, then c is a multiple of mn.
Proof. Let c = mk = nj. Write ms + nt = 1 with integers s, t. Multiply by c to obtain
c = cms + cnt = (nj)ms + (mk)nt = mn(js + kt).
To finish the proof of the theorem, let c = x − x_1 in the lemma to find that x − x_1 is a multiple of mn. Therefore, x ≡ x_1 (mod mn). This means that any two solutions to the system of congruences are congruent mod mn, as claimed.
Solve x ≡ 3 (mod 7), x ≡ 5 (mod 15).
SOLUTION
x ≡ 80 (mod 105) (note: 105 = 7 · 15). Since 80 ≡ 3 (mod 7) and 80 ≡ 5 (mod 15), 80 is a solution. The theorem guarantees that such a solution exists, and says that it is uniquely determined mod the product mn, which is 105 in the present example.
How does one find the solution? One way, which works with small numbers m and n, is to list the numbers congruent to b (mod n) until you find one that is congruent to a (mod m). For example, the numbers congruent to 5 (mod 15) are
5, 20, 35, 50, 65, 80, 95, … .
Mod 7, these are 5, 6, 0, 1, 2, 3, 4, … . Since we want a number that is 3 (mod 7), we choose 80.
For slightly larger numbers m and n, making a list would be inefficient. However, the proof of the theorem gives a fast method for finding x:
1. Use the Extended Euclidean algorithm to find s and t with ms + nt = 1.
2. Let x ≡ bms + ant (mod mn).
Solve the simultaneous congruences x ≡ a (mod 12345), x ≡ b (mod 11111).
SOLUTION
First, we know from our calculations in Section 3.2 that
12345 · (-2224) + 11111 · 2471 = 1,
so we may take s = -2224 and t = 2471. Therefore, with m = 12345 and n = 11111, the solution is
x ≡ bms + ant = b · 12345 · (-2224) + a · 11111 · 2471 (mod 12345 · 11111).
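The recipe x ≡ bms + ant (mod mn) from the proof translates directly into code (function names are ours, for illustration):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    old_r, r, old_x, x, old_y, y = a, b, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

# Solve x ≡ a (mod m), x ≡ b (mod n), assuming gcd(m, n) = 1.
def crt(a, m, b, n):
    g, s, t = extended_gcd(m, n)   # m*s + n*t = 1
    assert g == 1, "moduli must be relatively prime"
    return (b * m * s + a * n * t) % (m * n)

print(crt(3, 7, 5, 15))  # 80
```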
How do you use the Chinese remainder theorem? The main idea is that if you start with a congruence mod a composite number n, you can break it into simultaneous congruences mod each prime power factor of n, then recombine the resulting information to obtain an answer mod n. The advantage is that often it is easier to analyze congruences mod primes or mod prime powers than to work mod composite numbers.
Suppose you want to solve x^2 ≡ 1 (mod 35). Note that 35 = 5 · 7. We have
x^2 ≡ 1 (mod 5),  x^2 ≡ 1 (mod 7).
Now, x^2 ≡ 1 (mod 5) has two solutions: x ≡ ±1 (mod 5). Also, x^2 ≡ 1 (mod 7) has two solutions: x ≡ ±1 (mod 7). We can put these together in four ways:
x ≡ 1 (mod 5), x ≡ 1 (mod 7) gives x ≡ 1 (mod 35);
x ≡ 1 (mod 5), x ≡ -1 (mod 7) gives x ≡ 6 (mod 35);
x ≡ -1 (mod 5), x ≡ 1 (mod 7) gives x ≡ 29 (mod 35);
x ≡ -1 (mod 5), x ≡ -1 (mod 7) gives x ≡ 34 (mod 35).
So the solutions of x^2 ≡ 1 (mod 35) are x ≡ 1, 6, 29, 34 (mod 35).
In general, if n = p_1 p_2 ⋯ p_r is the product of r distinct odd primes, then x^2 ≡ 1 (mod n) has 2^r solutions. This is a consequence of the following.
Let m_1, …, m_k be integers with gcd(m_i, m_j) = 1 whenever i ≠ j. Given integers a_1, …, a_k, there exists exactly one solution x (mod m_1 m_2 ⋯ m_k) to the simultaneous congruences
x ≡ a_1 (mod m_1), x ≡ a_2 (mod m_2), …, x ≡ a_k (mod m_k).
For example, the theorem guarantees there is a solution to three simultaneous congruences with pairwise relatively prime moduli, and that this solution is unique mod the product of the moduli.
Exercise 57 gives a method for computing the number x in the theorem.
Throughout this book, we will be interested in numbers of the form a^b (mod n).
In this and the next couple of sections, we discuss some properties of numbers raised to a power modulo an integer.
Suppose we want to compute 2^1234 (mod 789). If we first compute 2^1234, then reduce mod 789, we’ll be working with very large numbers, even though the final answer has only 3 digits. We should therefore perform each multiplication and then calculate the remainder. Calculating the consecutive powers of 2 this way would require that we perform the modular multiplication 1233 times. This method is too slow to be practical, especially when the exponent becomes very large. A more efficient way is the following (all congruences are mod 789).
We start with 2^2 ≡ 4 (mod 789) and repeatedly square both sides to obtain the following congruences:
2^4 ≡ 16
2^8 ≡ 256
2^16 ≡ 49
2^32 ≡ 34
2^64 ≡ 367
2^128 ≡ 559
2^256 ≡ 37
2^512 ≡ 580
2^1024 ≡ 286.
Since 1234 = 1024 + 128 + 64 + 16 + 2 (this just means that 1234 equals 10011010010 in binary), we have
2^1234 = 2^1024 · 2^128 · 2^64 · 2^16 · 2^2 ≡ 286 · 559 · 367 · 49 · 4 ≡ 481 (mod 789).
Note that we never needed to work with a number larger than 788^2.
The same method works in general. If we want to compute a^b (mod n), we can do it with at most 2 log_2(b) multiplications mod n, and we never have to work with numbers larger than n^2. This means that exponentiation can be accomplished quickly, and not much memory is needed.
This method is very useful if a, b, n are 100-digit numbers. If we simply computed a^b, then reduced mod n, the computer’s memory would overflow: The number a^b has more than 10^100 digits, which is more digits than there are particles in the universe. However, the computation of a^b (mod n) can be accomplished in fewer than 700 steps by the present method, never using a number of more than 200 digits.
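The repeated-squaring procedure just described can be sketched as follows (the function name is ours); it processes the binary digits of the exponent from right to left, squaring at each step:

```python
# Modular exponentiation by repeated squaring ("square and multiply").
def power_mod(a, e, n):
    result = 1
    base = a % n
    while e > 0:
        if e & 1:                   # current binary digit of e is 1
            result = result * base % n
        base = base * base % n      # square for the next binary digit
        e >>= 1
    return result

print(power_mod(2, 1234, 789))  # 481
```

Python’s built-in pow(a, e, n) does the same thing; the sketch is only to make the method explicit.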
Algorithmic versions of this procedure are given in Exercise 56. For more examples, see Examples 8 and 24–30 in the Computer Appendices.
Two of the most basic results in number theory are Fermat’s and Euler’s theorems. Originally admired for their theoretical value, they have more recently proved to have important cryptographic applications and will be used repeatedly throughout this book.
If p is a prime and p does not divide a, then a^(p−1) ≡ 1 (mod p).
Proof. Let
S = {1, 2, 3, …, p − 1}.
Consider the map ψ: S → S defined by ψ(x) ≡ ax (mod p). For example, when p = 7 and a = 2, the map ψ takes a number x, multiplies it by 2, then reduces the result mod 7.
We need to check that if x is in S, then ψ(x) is actually in S; that is, ψ(x) ≠ 0. Suppose ψ(x) = 0. Then ax ≡ 0 (mod p). Since gcd(a, p) = 1, we can divide this congruence by a to obtain x ≡ 0 (mod p), so x is not in S. This contradiction means that ψ(x) cannot be 0, hence ψ(x) is in S. Now suppose there are x, y in S with ψ(x) = ψ(y). This means ax ≡ ay (mod p). Since gcd(a, p) = 1, we can divide this congruence by a to obtain x ≡ y (mod p). We conclude that if x, y are distinct elements of S, then ψ(x) and ψ(y) are distinct. Therefore,
ψ(1), ψ(2), …, ψ(p − 1)
are distinct elements of S. Since S has only p − 1 elements, these must be the elements of S written in some order. It follows that
1 · 2 · 3 ⋯ (p − 1) ≡ ψ(1) ψ(2) ⋯ ψ(p − 1) ≡ a^(p−1) (1 · 2 · 3 ⋯ (p − 1)) (mod p).
Since gcd(j, p) = 1 for each j with 1 ≤ j ≤ p − 1, we can divide this congruence by 1, 2, 3, …, p − 1. What remains is
1 ≡ a^(p−1) (mod p).
From this we can evaluate powers such as 2^53 (mod 11): Write 2^53 = (2^10)^5 · 2^3 ≡ 1^5 · 2^3 ≡ 8 (mod 11). Note that when working mod 11, we are essentially working with the exponents mod 10, not mod 11. In other words, from 53 ≡ 3 (mod 10) we deduce 2^53 ≡ 2^3 (mod 11).
The example leads us to a very important fact:
Let p be prime and let a, x, y be integers with gcd(a, p) = 1. If x ≡ y (mod p − 1), then a^x ≡ a^y (mod p). In other words, if you want to work mod p, you should work mod p − 1 in the exponent.
Proof. Write x = y + (p − 1)k. Then
a^x = a^(y + (p−1)k) = a^y (a^(p−1))^k ≡ a^y · 1^k ≡ a^y (mod p).
This completes the proof.
In the rest of this book, almost every time you see a congruence mod p − 1, it will involve numbers that appear in exponents. The Basic Principle that was just stated shows that this translates into an overall congruence mod p. Do not make the (unfortunately, very common) mistake of working mod p in the exponent with the hope that it will yield an overall congruence mod p. It doesn’t.
We can often use Fermat’s theorem to show that a number is composite, without factoring. For example, let’s show that 49 is composite. We use the technique of Section 3.5 to calculate 2^48 (mod 49). Since
2^48 ≡ 15 ≢ 1 (mod 49),
we conclude that 49 cannot be prime (otherwise, Fermat’s theorem would require that 2^48 ≡ 1 (mod 49)). Note that we showed that a factorization must exist, even though we didn’t find the factors.
Usually, if 2^(n−1) ≡ 1 (mod n), the number n is prime. However, there are exceptions: n = 561 = 3 · 11 · 17 is composite but 2^560 ≡ 1 (mod 561). We can see this as follows: Since 2^2 ≡ 1 (mod 3), we have 2^560 = (2^2)^280 ≡ 1 (mod 3). Similarly, since 2^10 ≡ 1 (mod 11) and 2^16 ≡ 1 (mod 17), we can conclude that 2^560 = (2^10)^56 ≡ 1 (mod 11) and 2^560 = (2^16)^35 ≡ 1 (mod 17). Putting things together via the Chinese remainder theorem, we find that 2^560 ≡ 1 (mod 561).
Another such exception is n = 1729 = 7 · 13 · 19. However, these exceptions are fairly rare in practice. Therefore, if 2^(n−1) ≡ 1 (mod n), it is quite likely that n is prime. Of course, if 2^(n−1) ≢ 1 (mod n), then n cannot be prime.
Since 2^(n−1) (mod n) can be evaluated very quickly (see Section 3.5), this gives a way to search for prime numbers. Namely, choose a starting point and successively test each odd number n to see whether 2^(n−1) ≡ 1 (mod n). If n fails the test, discard it and proceed to the next n. When an n passes the test, use more sophisticated techniques (see Section 9.3) to test n for primality. The advantage is that this procedure is much faster than trying to factor each n, especially since it eliminates many n quickly. Of course, there are ways to speed up the search, for example, by first eliminating any n that has small prime factors.
For example, suppose we want to find a random 300-digit prime. Choose a random 300-digit odd integer as a starting point. Successively, for each odd integer compute by the modular exponentiation technique of Section 3.5. If Fermat’s theorem guarantees that is not prime. This will probably throw out all the composites encountered. When you find an with you probably have a prime number. But how many do we have to examine before finding the prime? The Prime Number Theorem (see Subsection 3.1.2) says that the number of 300-digit primes is approximately so approximately 1 out of every 690 numbers is prime. But we are looking only at odd numbers, so we expect to find a prime approximately every 345 steps. Since the modular exponentiations can be done quickly, the whole process takes much less than a second on a laptop computer.
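The search just described can be sketched in a few lines of Python. The starting size below is scaled down from 300 digits to 20 so the demo runs instantly; the function name is an illustrative choice, not from the text:

```python
import random

def fermat_search(start):
    """Return the first odd n >= start with 2^(n-1) ≡ 1 (mod n).
    Such n is probably prime; confirm with a stronger test (Section 9.3)."""
    n = start if start % 2 else start + 1
    while pow(2, n - 1, n) != 1:   # failure proves n composite (Fermat)
        n += 2                     # test only odd numbers
    return n

# 20-digit starting point instead of 300 digits, so the demo is instant:
print(fermat_search(random.randrange(10**19, 10**20)))
```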
We'll also need the analog of Fermat's theorem for a composite modulus $n$. Let $\varphi(n)$ be the number of integers $1 \le a \le n$ such that $\gcd(a, n) = 1$. For example, if $n = 10$, then there are four such integers, namely 1, 3, 7, 9. Therefore, $\varphi(10) = 4$. Often $\varphi$ is called Euler's $\varphi$-function.
If $p$ is a prime and $n = p^r$, then we must remove every $p$th number in order to get the list of $a$'s with $\gcd(a, n) = 1$, which yields $\varphi(p^r) = p^r\left(1 - \frac1p\right) = p^r - p^{r-1}.$
In particular, $\varphi(p) = p - 1.$
More generally, it can be deduced from the Chinese remainder theorem that for any integer $n$, $\varphi(n) = n\prod_{p \mid n}\left(1 - \frac1p\right),$
where the product is over the distinct primes $p$ dividing $n$. When $n = pq$ is the product of two distinct primes, this yields $\varphi(pq) = (p-1)(q-1).$
Euler's Theorem. If $\gcd(a, n) = 1$, then $a^{\varphi(n)} \equiv 1 \pmod n.$
Proof. The proof of this theorem is almost the same as the one given for Fermat's theorem. Let $S$ be the set of integers $1 \le x \le n$ with $\gcd(x, n) = 1$. Let $\psi(x) = ax \bmod n$. As in the proof of Fermat's theorem, the numbers $\psi(x)$ for $x \in S$ are the numbers in $S$ written in some order. Therefore, $\prod_{x\in S} x \equiv \prod_{x\in S} ax \equiv a^{\varphi(n)}\prod_{x\in S} x \pmod n.$
Dividing out the factors $x$, which are relatively prime to $n$, we are left with $a^{\varphi(n)} \equiv 1 \pmod n.$
Note that when $n = p$ is prime, Euler's theorem is the same as Fermat's theorem.
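As a sketch, Euler's $\varphi$-function can be computed from the product formula above, and Euler's theorem checked directly; the moduli below are arbitrary test values:

```python
from math import gcd

def phi(n):
    """Euler phi via the product formula phi(n) = n * prod(1 - 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # one leftover prime factor
        result -= result // m
    return result

print(phi(10))      # 4, counting 1, 3, 7, 9
print(phi(1000))    # 400
# Euler's theorem: a**phi(n) ≡ 1 (mod n) whenever gcd(a, n) = 1
for a in (3, 7, 9):
    assert gcd(a, 1000) == 1 and pow(a, phi(1000), 1000) == 1
```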
What are the last three digits of $7^{803}$?
SOLUTION
Knowing the last three digits is the same as working mod 1000. Since $\varphi(1000) = 1000\left(1-\frac12\right)\left(1-\frac15\right) = 400$ and $803 \equiv 3 \pmod{400}$, we have $7^{803} = 7^{3 + 2\cdot 400} = 7^3\left(7^{400}\right)^2 \equiv 7^3 \equiv 343 \pmod{1000}.$ Therefore, the last three digits are 343.
In this example, we were able to change the exponent 803 to 3 because $803 \equiv 3 \pmod{\varphi(1000)}$.
Compute $2^{43210} \pmod{101}$.
SOLUTION
Note that 101 is prime. From Fermat's theorem, we know that $2^{100} \equiv 1 \pmod{101}$. Therefore, $2^{43210} = \left(2^{100}\right)^{432}\cdot 2^{10} \equiv 1^{432}\cdot 1024 \equiv 14 \pmod{101}.$
In this case, we were able to change the exponent 43210 to 10 because $43210 \equiv 10 \pmod{100}$.
To summarize, we state the following, which is a generalization of what we know for primes:
Basic Principle. Let $a, n, x, y$ be integers with $n \ge 1$ and $\gcd(a, n) = 1$. If $x \equiv y \pmod{\varphi(n)}$, then $a^x \equiv a^y \pmod n$. In other words, if you want to work mod $n$, you should work mod $\varphi(n)$ in the exponent.
Proof. Write $x = y + \varphi(n)k$. Then $a^x = a^{y+\varphi(n)k} = a^y\left(a^{\varphi(n)}\right)^k \equiv a^y\cdot 1^k \equiv a^y \pmod n.$
This completes the proof.
This extremely important fact will be used repeatedly in the remainder of the book. Review the preceding examples until you are convinced that the exponents mod $\varphi(1000) = 400$ and mod $\varphi(101) = 100$ are what count (i.e., don't be one of the many people who mistakenly try to work with the exponents mod 1000 and mod 101 in these examples).
Alice wishes to transfer a secret key $K$ (or any short message) to Bob via communication on a public channel. The Basic Principle can be used to solve this problem.
First, here is a nonmathematical way to do it. Alice puts $K$ into a box and puts her lock on the box. She sends the locked box to Bob, who puts his lock on the box and sends the box back to Alice. Alice then takes her lock off and sends the box to Bob. Bob takes his lock off, opens the box, and finds $K$.
Here is the mathematical realization of the method. First, Alice chooses a large prime number $p$ that is large enough to represent the key $K$. For example, if Alice were trying to send a 56-bit key, she would need a prime number that is at least 56 bits long. However, for security purposes (to make what is known as the discrete log problem hard), she would want to choose a prime significantly longer than 56 bits. Alice publishes $p$ so that Bob (or anyone else) can download it. Bob downloads $p$. Alice and Bob now do the following:
1. Alice selects a random number $a$ with $\gcd(a, p-1) = 1$, and Bob selects a random number $b$ with $\gcd(b, p-1) = 1$. We will denote by $a^{-1}$ and $b^{-1}$ the inverses of $a$ and $b$ mod $p - 1$.
2. Alice sends $K_1 \equiv K^a \pmod p$ to Bob.
3. Bob sends $K_2 \equiv K_1^b \pmod p$ to Alice.
4. Alice sends $K_3 \equiv K_2^{a^{-1}} \pmod p$ to Bob.
5. Bob computes $K \equiv K_3^{b^{-1}} \pmod p$.
At the end of this protocol, both Alice and Bob have the key $K$.
The reason this works is that Bob has computed $K^{aba^{-1}b^{-1}} \pmod p$. Since $aba^{-1}b^{-1} \equiv 1 \pmod{p-1}$, the Basic Principle implies that $K^{aba^{-1}b^{-1}} \equiv K^1 = K \pmod p$.
The procedure is usually attributed to Shamir and to Massey and Omura. One drawback is that it requires multiple communications between Alice and Bob. Also, it is vulnerable to the intruder-in-the-middle attack (see Chapter 15).
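The five steps above can be simulated directly. The prime, key, and exponents below are small illustrative choices; real use requires a large random prime and large random secret exponents:

```python
from math import gcd

# Toy run of the three-pass (Shamir / Massey-Omura) protocol.
p = 2**61 - 1              # a prime, published by Alice
K = 123456789              # the secret key, with 0 < K < p

a, b = 17, 23              # secret exponents of Alice and Bob (toy values)
assert gcd(a, p - 1) == 1 and gcd(b, p - 1) == 1

ainv = pow(a, -1, p - 1)   # inverse of a mod p-1 (Python 3.8+)
binv = pow(b, -1, p - 1)   # inverse of b mod p-1

K1 = pow(K, a, p)          # Alice -> Bob:   K^a
K2 = pow(K1, b, p)         # Bob -> Alice:   K^(ab)
K3 = pow(K2, ainv, p)      # Alice -> Bob:   K^b (Alice's lock removed)
recovered = pow(K3, binv, p)
print(recovered == K)      # True
```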
Consider the powers of 3 mod 7: $3^1 \equiv 3,\quad 3^2 \equiv 2,\quad 3^3 \equiv 6,\quad 3^4 \equiv 4,\quad 3^5 \equiv 5,\quad 3^6 \equiv 1 \pmod 7.$
Note that we obtain all the nonzero congruence classes mod 7 as powers of 3. This means that 3 is a primitive root mod 7 (the term multiplicative generator might be better but is not as common). Similarly, every nonzero congruence class mod 13 is a power of 2, so 2 is a primitive root mod 13. However, $3^3 = 27 \equiv 1 \pmod{13}$, so the powers of 3 mod 13 repeat much more frequently: $3,\ 9,\ 1,\ 3,\ 9,\ 1,\ \ldots,$
so only 1, 3, 9 are powers of 3. Therefore, 3 is not a primitive root mod 13. The primitive roots mod 13 are 2, 6, 7, 11.
In general, when $p$ is a prime, a primitive root mod $p$ is a number whose powers yield every nonzero class mod $p$. It can be shown that there are $\varphi(p-1)$ primitive roots mod $p$. In particular, there is always at least one. In practice, it is not difficult to find one, at least if the factorization of $p-1$ is known. See Exercise 54.
The following summarizes the main facts we need about primitive roots.
Proposition. Let $g$ be a primitive root for the prime $p$.
1. Let $n$ be an integer. Then $g^n \equiv 1 \pmod p$ if and only if $n \equiv 0 \pmod{p-1}$.
2. If $j$ and $k$ are integers, then $g^j \equiv g^k \pmod p$ if and only if $j \equiv k \pmod{p-1}$.
3. A number $h$ is a primitive root mod $p$ if and only if $p-1$ is the smallest positive integer $n$ such that $h^n \equiv 1 \pmod p$.
Proof. If $n \equiv 0 \pmod{p-1}$, then $n = (p-1)m$ for some $m$. Therefore, $g^n = \left(g^{p-1}\right)^m \equiv 1^m \equiv 1 \pmod p$
by Fermat's theorem. Conversely, suppose $g^n \equiv 1 \pmod p$. We want to show that $p-1$ divides $n$, so we divide $p-1$ into $n$ and try to show that the remainder is 0. Write $n = (p-1)q + r,\qquad 0 \le r < p-1$
(this is just division with quotient $q$ and remainder $r$). We have $1 \equiv g^n = \left(g^{p-1}\right)^q g^r \equiv g^r \pmod p.$
Suppose $r > 0$. If we consider the powers of $g$, then we get back to 1 after $r$ steps. Then $g^{r+1} \equiv g,\ g^{r+2} \equiv g^2,\ \ldots,$
so the powers of $g$ yield only the numbers $g, g^2, \ldots, g^r$. Since $r < p-1$, not every number mod $p$ can be a power of $g$. This contradicts the assumption that $g$ is a primitive root.
The only possibility that remains is that $r = 0$. This means that $n = (p-1)q$, so $p-1$ divides $n$. This proves part (1).
For part (2), assume that $j \ge k$ (if not, switch $j$ and $k$). Suppose that $g^j \equiv g^k \pmod p$. Dividing both sides by $g^k$ yields $g^{j-k} \equiv 1 \pmod p$. By part (1), $j - k \equiv 0 \pmod{p-1}$, so $j \equiv k \pmod{p-1}$. Conversely, if $j \equiv k \pmod{p-1}$, then $g^{j-k} \equiv 1 \pmod p$, again by part (1). Multiplying by $g^k$ yields the result.
For part (3), if $h$ is a primitive root, then part (1) says that any positive integer $n$ with $h^n \equiv 1 \pmod p$ must be a multiple of $p-1$, so $p-1$ is the smallest. Conversely, suppose $p-1$ is the smallest. Look at the numbers $h, h^2, \ldots, h^{p-1}$. If two are congruent mod $p$, say $h^i \equiv h^j$ with $i < j$, then $h^{j-i} \equiv 1 \pmod p$ (note: $\gcd(h, p) = 1$ implies that we can divide by $h^i$). Since $0 < j - i < p - 1$, this contradicts the assumption that $p-1$ is smallest. Therefore, the numbers $h, h^2, \ldots, h^{p-1}$ must be distinct mod $p$. Since there are $p-1$ numbers on this list and there are $p-1$ nonzero numbers mod $p$, the two lists must be the same, up to order. Therefore, each nonzero number mod $p$ is congruent to a power of $h$, so $h$ is a primitive root mod $p$.
Warning: $h$ is a primitive root mod $p$ if and only if $p-1$ is the smallest positive $n$ such that $h^n \equiv 1 \pmod p$. If you want to prove that $h$ is a primitive root, it does not suffice to prove that $h^{p-1} \equiv 1 \pmod p$. After all, Fermat's theorem says that every $h$ satisfies this, as long as $p \nmid h$. To prove that $h$ is a primitive root, you must show that $p-1$ is the smallest positive exponent $n$ such that $h^n \equiv 1 \pmod p$.
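The order-based criterion in part (3) can be checked by brute force for small primes. The naive order computation below is an illustrative sketch, fine only for small $p$:

```python
def order_mod_p(h, p):
    """Smallest positive k with h**k ≡ 1 (mod p); assumes gcd(h, p) = 1."""
    x, k = h % p, 1
    while x != 1:
        x = x * h % p
        k += 1
    return k

def is_primitive_root(h, p):
    # h is a primitive root exactly when its order is p - 1.
    return order_mod_p(h, p) == p - 1

p = 13
print([h for h in range(1, p) if is_primitive_root(h, p)])   # [2, 6, 7, 11]
```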
Finding the inverse of a matrix mod $n$ can be accomplished by the usual methods for inverting a matrix, as long as we apply the rule given in Section 3.3 for dealing with fractions. The basic fact we need is that a square matrix is invertible mod $n$ if and only if its determinant and $n$ are relatively prime.
We treat only small matrices here, since that is all we need for the examples in this book. In this case, the easiest way to find the inverse of the matrix is to use rational numbers, then change back to numbers mod $n$. It is a general fact that the inverse of an integer matrix can always be written as another integer matrix divided by the determinant of the original matrix. Since we are assuming the determinant and $n$ are relatively prime, we can invert the determinant as in Section 3.3.
For example, in the $2\times 2$ case the usual formula is $\begin{pmatrix} a & b\\ c & d\end{pmatrix}^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b\\ -c & a\end{pmatrix},$
so we need to find an inverse for the determinant $ad - bc \pmod n$.
Suppose we want to invert a $2\times 2$ matrix mod 11 whose determinant is $\equiv 9 \pmod{11}$, so we need the inverse of 9 mod 11. Since $9\cdot 5 = 45 \equiv 1 \pmod{11}$, we can replace $1/9$ by 5 and obtain the inverse mod 11 by multiplying the integer matrix in the formula by 5 and reducing the entries mod 11.
A quick calculation (multiplying the original matrix by the result mod 11) shows that the product is the identity matrix.
Suppose we want the inverse of a $3\times 3$ matrix mod 11.
The determinant is 2, and the inverse of the matrix in rational numbers is an integer matrix divided by 2.
(For ways to calculate the inverse of a matrix, look at any book on linear algebra.) We can replace $1/2$ with 6 mod 11 (since $2\cdot 6 = 12 \equiv 1 \pmod{11}$) and obtain the inverse mod 11.
Why do we need the determinant and $n$ to be relatively prime? Suppose $MN \equiv I \pmod n$, where $I$ is the identity matrix. Then $\det(M)\det(N) \equiv \det(I) = 1 \pmod n.$
Therefore, $\det(M)$ has an inverse mod $n$, which means that $\det(M)$ and $n$ must be relatively prime.
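The $2\times 2$ adjugate formula translates directly into a short sketch. The sample matrix and modulus below are illustrative choices, not necessarily the ones used in the examples above:

```python
def inv_mat2_mod(M, n):
    """Inverse of a 2x2 integer matrix mod n via the adjugate formula;
    requires gcd(det M, n) = 1 (pow raises ValueError otherwise)."""
    (a, b), (c, d) = M
    det_inv = pow((a * d - b * c) % n, -1, n)   # inverse of det mod n
    return [[det_inv * d % n, det_inv * -b % n],
            [det_inv * -c % n, det_inv * a % n]]

# Illustrative example: det = -2 ≡ 9 (mod 11), and 9*5 ≡ 1 (mod 11).
M = [[1, 2], [3, 4]]
Minv = inv_mat2_mod(M, 11)
print(Minv)   # [[9, 1], [7, 5]]
```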
Suppose we are told that $x^2 \equiv b \pmod n$ has a solution. How do we find one solution, and how do we find all solutions? More generally, consider the problem of finding all solutions of $x^2 \equiv b \pmod n$, where $n = pq$ is the product of two primes. We show in the following that this can be done quite easily, once the factorization of $n$ is known. Conversely, if we know all solutions, then it is easy to factor $n$.
Let's start with the case of square roots mod a prime $p$. The easiest case is when $p \equiv 3 \pmod 4$, and this suffices for our purposes. The case when $p \equiv 1 \pmod 4$ is more difficult. See [Cohen, pp. 31–34] or [KraftW, p. 317].
Proposition. Let $p \equiv 3 \pmod 4$ be prime and let $y$ be an integer. Let $x \equiv y^{(p+1)/4} \pmod p$.
1. If $y$ has a square root mod $p$, then the square roots of $y$ mod $p$ are $\pm x$.
2. If $y$ has no square root mod $p$, then $-y$ has a square root mod $p$, and the square roots of $-y$ are $\pm x$.
Proof. If $y \equiv 0 \pmod p$, all the statements are trivial, so assume $y \not\equiv 0 \pmod p$. Fermat's theorem says that $y^{p-1} \equiv 1 \pmod p$. Therefore, $x^4 \equiv y^{p+1} = y^2\,y^{p-1} \equiv y^2 \pmod p.$
This implies that $\left(x^2 - y\right)\left(x^2 + y\right) \equiv 0 \pmod p$, so $x^2 \equiv \pm y \pmod p$. (See Exercise 13(a).) Therefore, at least one of $y$ and $-y$ is a square mod $p$. Suppose both $y$ and $-y$ are squares mod $p$, say $y \equiv a^2$ and $-y \equiv b^2$. Then $-1 \equiv (a/b)^2$ (work with fractions mod $p$ as in Section 3.3), which means $-1$ is a square mod $p$. This is impossible when $p \equiv 3 \pmod 4$ (see Exercise 26). Therefore, exactly one of $y$ and $-y$ has a square root mod $p$. If $y$ has a square root mod $p$, then $x^2 \equiv y$, and the two square roots of $y$ are $\pm x$. If $-y$ has a square root, then $x^2 \equiv -y$, and the square roots of $-y$ are $\pm x$.
Let's find the square root of 5 mod 11. Since $11 \equiv 3 \pmod 4$, we compute $x \equiv 5^{(11+1)/4} = 5^3 \equiv 4 \pmod{11}$. Since $4^2 \equiv 5 \pmod{11}$, the square roots of 5 mod 11 are $\pm 4$, namely 4 and 7.
Now let's try to find a square root of 2 mod 11. Since $11 \equiv 3 \pmod 4$, we compute $2^{(11+1)/4} = 2^3 \equiv 8 \pmod{11}$. But $8^2 \equiv 64 \equiv 9 \equiv -2 \pmod{11}$, so we have found a square root of $-2$ rather than of 2. This is because 2 has no square root mod 11.
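The proposition gives a one-line square-root computation for primes $p \equiv 3 \pmod 4$; here is a hedged sketch reproducing both examples:

```python
def sqrt_mod_p(y, p):
    """Square roots of y mod a prime p with p ≡ 3 (mod 4).
    Returns the pair (x, p - x) if y is a square mod p, else None
    (in that case -y has the square roots ±x instead)."""
    assert p % 4 == 3
    x = pow(y, (p + 1) // 4, p)
    return (x, (-x) % p) if x * x % p == y % p else None

print(sqrt_mod_p(5, 11))   # (4, 7)
print(sqrt_mod_p(2, 11))   # None: 2 has no square root mod 11
```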
We now consider square roots for a composite modulus. Note that $x^2 \equiv 71 \pmod{77}$
means that $x^2 \equiv 71 \equiv 1 \pmod 7 \quad\text{and}\quad x^2 \equiv 71 \equiv 5 \pmod{11}.$
Therefore, $x \equiv \pm 1 \pmod 7 \quad\text{and}\quad x \equiv \pm 4 \pmod{11}.$
The Chinese remainder theorem tells us that a congruence mod 7 and a congruence mod 11 can be recombined into a congruence mod 77. For example, if $x \equiv 1 \pmod 7$ and $x \equiv 4 \pmod{11}$, then $x \equiv 15 \pmod{77}$. In this way, we can recombine in four ways to get the solutions $x \equiv \pm 15,\ \pm 29 \pmod{77}.$
Now let's turn things around. Suppose $n = pq$ is the product of two primes and we know the four solutions $x \equiv \pm a, \pm b$ of $x^2 \equiv y \pmod n$. From the construction just used above, we know that $a \equiv b \pmod p$ and $a \equiv -b \pmod q$ (or the same congruences with $p$ and $q$ switched). Therefore, $p \mid (a - b)$ but $q \nmid (a - b)$. This means that $\gcd(a - b, n) = p$, so we have found a nontrivial factor of $n$ (this is essentially the Basic Factorization Principle of Section 9.4).
For example, in the preceding example we know that $15^2 \equiv 29^2 \equiv 71 \pmod{77}$. Therefore, $\gcd(29 - 15,\ 77) = 7$ gives a nontrivial factor of 77.
Another example of computing square roots mod is given in Section 18.1.
Notice that all the operations used above are fast, with the exception of factoring $n$. In particular, the Chinese remainder theorem calculation can be done quickly. So can the computation of the gcd. The modular exponentiations needed to compute square roots mod $p$ and mod $q$ can be done quickly using successive squaring. Therefore, we can state the following principle:
Suppose $n = pq$ is the product of two primes congruent to 3 mod 4, and suppose $y$ is a number relatively prime to $n$ that has a square root mod $n$. Then finding the four solutions to $x^2 \equiv y \pmod n$ is computationally equivalent to factoring $n$.
In other words, if we can find the solutions, then we can easily factor $n$; conversely, if we can factor $n$, we can easily find the solutions. For more on this, see Section 9.4.
Now suppose someone has a machine that can find single square roots mod $n$. That is, if we give the machine a number $y$ that has a square root mod $n$, then the machine returns one solution of $x^2 \equiv y \pmod n$. We can use this machine to factor $n$ as follows: Choose a random integer $x_0$, compute $y \equiv x_0^2 \pmod n$, and give the machine $y$. The machine returns $x$ with $x^2 \equiv y \pmod n$. If our choice of $x_0$ is truly random, then the machine has no way of knowing the value of $x_0$; hence it does not know whether $x \equiv \pm x_0 \pmod n$ or not, even if it knows all four square roots of $y$. So half of the time $x \equiv \pm x_0$, but half of the time $x \not\equiv \pm x_0$. In the latter case, we compute $\gcd(x - x_0, n)$ and obtain a nontrivial factor of $n$. Since there is a 50% chance of success for each time we choose $x_0$, if we choose several random values of $x_0$, then it is very likely that we will eventually factor $n$. Therefore, we conclude that any machine that can find single square roots mod $n$ can be used, with high probability, to factor $n$.
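This reduction can be simulated. The brute-force "oracle" below stands in for the hypothetical square-root machine (it is feasible only because $n = 77$ is tiny); the function names are illustrative:

```python
import random
from math import gcd

def factor_with_sqrt_oracle(n, oracle):
    """Factor n = p*q using a black box that returns one square root mod n."""
    while True:
        x0 = random.randrange(2, n)
        if gcd(x0, n) > 1:             # lucky: x0 already shares a factor
            return gcd(x0, n)
        x = oracle(x0 * x0 % n)        # machine returns some root of x0^2
        if x not in (x0, n - x0):      # x ≢ ±x0: gcd gives a factor
            return gcd(x - x0, n)

# Stand-in oracle for n = 77 = 7 * 11, built by brute force:
n = 77
def oracle(y):
    return next(x for x in range(n) if x * x % n == y)

p = factor_with_sqrt_oracle(n, oracle)
print(sorted([p, n // p]))   # [7, 11]
```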
Suppose we want to determine whether or not $x^2 \equiv y \pmod p$ has a solution, where $p$ is prime. If $p$ is small, we could square all of the numbers mod $p$ and see if $y$ is on the list. When $p$ is large, this is impractical. If $p \equiv 3 \pmod 4$, we can use the technique of the previous section and compute $s \equiv y^{(p+1)/4} \pmod p$. If $y$ has a square root, then $s$ is one of them, so we simply have to square $s$ and see if we get $y$. If not, then $y$ has no square root mod $p$. The following proposition gives a method for deciding whether $y$ is a square mod $p$ that works for arbitrary odd $p$.
Proposition. Let $p$ be an odd prime and let $y$ be an integer with $y \not\equiv 0 \pmod p$. Then $y^{(p-1)/2} \equiv \pm 1 \pmod p$. The congruence $x^2 \equiv y \pmod p$ has a solution if and only if $y^{(p-1)/2} \equiv 1 \pmod p$.
Proof. Let $s \equiv y^{(p-1)/2} \pmod p$. Then $s^2 \equiv y^{p-1} \equiv 1 \pmod p$ by Fermat's theorem. Therefore (Exercise 15), $s \equiv \pm 1 \pmod p$.
If $y \equiv x^2$, then $y^{(p-1)/2} \equiv x^{p-1} \equiv 1 \pmod p$. The hard part is showing the converse. Let $g$ be a primitive root mod $p$. Then $y \equiv g^j$ for some $j$. If $y^{(p-1)/2} \equiv 1 \pmod p$, then $g^{j(p-1)/2} \equiv 1 \pmod p.$
By the Proposition of Section 3.7, $j(p-1)/2 \equiv 0 \pmod{p-1}$. This implies that $j$ must be even: $j = 2k$. Therefore, $y \equiv \left(g^k\right)^2 \pmod p$, so $y$ is a square mod $p$.
The criterion is very easy to implement on a computer, but it can be rather difficult to use by hand. In the following, we introduce the Legendre and Jacobi symbols, which give us an easy way to determine whether or not a number is a square mod $p$. They also are useful in primality testing (see Section 9.3).
Let $p$ be an odd prime and let $a \not\equiv 0 \pmod p$. Define the Legendre symbol $\left(\frac{a}{p}\right) = \begin{cases} +1 & \text{if } x^2 \equiv a \pmod p \text{ has a solution,}\\ -1 & \text{if } x^2 \equiv a \pmod p \text{ has no solution.}\end{cases}$
Some important properties of the Legendre symbol are given in the following.
Proposition. Let $p$ be an odd prime.
1. If $a \equiv b \not\equiv 0 \pmod p$, then $\left(\frac ap\right) = \left(\frac bp\right)$.
2. If $a \not\equiv 0 \pmod p$, then $\left(\frac ap\right) \equiv a^{(p-1)/2} \pmod p$.
3. If $ab \not\equiv 0 \pmod p$, then $\left(\frac{ab}{p}\right) = \left(\frac ap\right)\left(\frac bp\right)$.
4. $\left(\frac{-1}{p}\right) = (-1)^{(p-1)/2}$.
Proof. Part (1) is true because the solutions to $x^2 \equiv a \pmod p$ are the same as those to $x^2 \equiv b \pmod p$ when $a \equiv b \pmod p$.
Part (2) is the definition of the Legendre symbol combined with the previous proposition.
To prove part (3), we use part (2): $\left(\frac{ab}{p}\right) \equiv (ab)^{(p-1)/2} = a^{(p-1)/2}\,b^{(p-1)/2} \equiv \left(\frac ap\right)\left(\frac bp\right) \pmod p.$
Since the left and right ends of this congruence are $\pm 1$ and they are congruent mod the odd prime $p$, they must be equal. This proves (3).
For part (4), use part (2) with $a = -1$: $\left(\frac{-1}{p}\right) \equiv (-1)^{(p-1)/2} \pmod p.$
Again, since the left and right sides of this congruence are $\pm 1$ and they are congruent mod the odd prime $p$, they must be equal. This proves (4).
Let $p = 11$. The nonzero squares mod 11 are $1, 3, 4, 5, 9$. We have $\left(\frac{2}{11}\right) = -1,$
and (use property (1)) $\left(\frac{13}{11}\right) = \left(\frac{2}{11}\right) = -1.$
Therefore, $x^2 \equiv 13 \pmod{11}$ has no solution.
The Jacobi symbol extends the Legendre symbol from primes $p$ to composite odd integers $n$. One might be tempted to define the symbol to be $+1$ if $a$ is a square mod $n$ and $-1$ if not. However, this would cause the important property (3) to fail. For example, 2 is not a square mod 35, and 3 is not a square mod 35 (since they are not squares mod 5), but also the product $2\cdot 3 = 6$ is not a square mod 35 (since it is not a square mod 7). If Property (3) held, then we would have $(-1)(-1) = +1$ for the symbol of 6 mod 35, which is false.
In order to preserve property (3), we define the Jacobi symbol as follows. Let $n$ be an odd positive integer and let $a$ be a nonzero integer with $\gcd(a, n) = 1$. Let $n = p_1^{b_1}p_2^{b_2}\cdots p_r^{b_r}$
be the prime factorization of $n$. Then $\left(\frac an\right) = \prod_{i=1}^{r}\left(\frac{a}{p_i}\right)^{b_i}.$
The symbols on the right side are the Legendre symbols introduced earlier. Note that if $n$ is prime, the right side is simply one Legendre symbol, so the Jacobi symbol reduces to the Legendre symbol.
Let $n = 135 = 3^3\cdot 5$. Then $\left(\frac{2}{135}\right) = \left(\frac 23\right)^3\left(\frac 25\right) = (-1)^3(-1) = +1.$
Note that 2 is not a square mod 5, hence is not a square mod 135. Therefore, the fact that the Jacobi symbol $\left(\frac{2}{135}\right)$ has the value $+1$ does not imply that 2 is a square mod 135.
The main properties of the Jacobi symbol are given in the following theorem. Parts (1), (2), and (3) can be deduced from those of the Legendre symbol. Parts (4) and (5) are much deeper.
Let $n$ be odd.
1. If $a \equiv b \pmod n$ and $\gcd(a, n) = 1$, then $\left(\frac an\right) = \left(\frac bn\right)$.
2. If $\gcd(ab, n) = 1$, then $\left(\frac{ab}{n}\right) = \left(\frac an\right)\left(\frac bn\right)$.
3. $\left(\frac{-1}{n}\right) = (-1)^{(n-1)/2}$.
4. $\left(\frac{2}{n}\right) = (-1)^{\left(n^2-1\right)/8}$.
5. Let $m$ be odd with $\gcd(m, n) = 1$. Then $\left(\frac mn\right) = \begin{cases} -\left(\frac nm\right) & \text{if } m \equiv n \equiv 3 \pmod 4,\\ +\left(\frac nm\right) & \text{otherwise.}\end{cases}$
Note that we did not include a statement that $\left(\frac an\right) \equiv a^{(n-1)/2} \pmod n$. This is usually not true for composite $n$ (see Exercise 45). In fact, the Solovay-Strassen primality test (see Section 9.3) is based on this fact.
Part (5) is the famous law of quadratic reciprocity, proved by Gauss in 1796. When $m$ and $n$ are primes, it relates the question of whether $m$ is a square mod $n$ to the question of whether $n$ is a square mod $m$.
A proof of the theorem when and are primes can be found in most elementary number theory texts. The extension to composite and can be deduced fairly easily from this case. See [Niven et al.], [Rosen], or [KraftW], for example.
When quadratic reciprocity is combined with the other properties of the Jacobi symbol, we obtain a fast way to evaluate the symbol. Here are two examples.
Let's calculate $\left(\frac{4567}{12345}\right)$:
$\left(\frac{4567}{12345}\right) = \left(\frac{12345}{4567}\right) = \left(\frac{3211}{4567}\right) = -\left(\frac{4567}{3211}\right) = -\left(\frac{1356}{3211}\right) = -\left(\frac{2}{3211}\right)^2\left(\frac{339}{3211}\right)$
$= -\left(\frac{339}{3211}\right) = \left(\frac{3211}{339}\right) = \left(\frac{160}{339}\right) = \left(\frac{2}{339}\right)^5\left(\frac{5}{339}\right) = -\left(\frac{5}{339}\right) = -\left(\frac{339}{5}\right) = -\left(\frac{4}{5}\right) = -1.$
The only factorization needed in the calculation was removing powers of 2, which is easy to do. The fact that the calculations can be done without factoring odd numbers is important in the applications. The fact that the answer is $-1$ implies that 4567 is not a square mod 12345. However, if the answer had been $+1$, we could not have deduced whether 4567 is a square or is not a square mod 12345. See Exercise 44.
Let's calculate $\left(\frac{107}{137}\right)$:
$\left(\frac{107}{137}\right) = \left(\frac{137}{107}\right) = \left(\frac{30}{107}\right) = \left(\frac{2}{107}\right)\left(\frac{15}{107}\right) = -\left(\frac{15}{107}\right) = \left(\frac{107}{15}\right) = \left(\frac{2}{15}\right) = +1.$
Since 137 is a prime, this says that 107 is a square mod 137. In contrast, during the calculation, we used the fact that $\left(\frac{2}{15}\right) = +1$. This does not mean that 2 is a square mod 15. In fact, 2 is not a square mod 5, so it cannot be a square mod 15. Therefore, although we can interpret the final answer as saying that 107 is a square mod the prime 137, we should not interpret intermediate steps involving composite numbers as saying that a number is a square.
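The calculations above follow a mechanical pattern, which is easy to code. Here is the standard reciprocity loop for the Jacobi symbol (a sketch, not code from the text):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic
    reciprocity; no factoring of odd numbers is needed."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n ≡ ±3 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity: flip the symbol
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # n > 1 here means gcd(a, n) > 1

print(jacobi(4567, 12345))   # -1
print(jacobi(107, 137))      # 1
print(jacobi(2, 135))        # 1, even though 2 is not a square mod 135
```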
Suppose $n = pq$ is the product of two large primes. If $\left(\frac yn\right) = -1$, then we can conclude that $y$ is not a square mod $n$. What can we conclude if $\left(\frac yn\right) = +1$? Since $\left(\frac yn\right) = \left(\frac yp\right)\left(\frac yq\right),$
there are two possibilities: $\left(\frac yp\right) = \left(\frac yq\right) = -1 \quad\text{or}\quad \left(\frac yp\right) = \left(\frac yq\right) = +1.$
In the first case, $y$ is not a square mod $p$; therefore, $y$ cannot be a square mod $n$.
In the second case, $y$ is a square mod $p$ and mod $q$. The Chinese remainder theorem can be used to combine a square root mod $p$ and a square root mod $q$ to get a square root of $y$ mod $n$. Therefore, $y$ is a square mod $n$.
Therefore, if $\left(\frac yn\right) = +1$, then $y$ can be either a square or a nonsquare mod $n$. Deciding which case holds is called the quadratic residuosity problem. No fast algorithm is known for solving it. Of course, if we can factor $n$, then the problem can easily be solved by computing $\left(\frac yp\right)$.
Note: This section is more advanced than the rest of the chapter. It is included because finite fields are often used in cryptography. In particular, finite fields appear in four places in this book. The finite field $GF(2^8)$ is used in AES (Chapter 8). Finite fields give an explanation of some phenomena that are mentioned in Section 5.2. Finally, finite fields are used in Section 21.4, Chapter 22, and in error correcting codes (Chapter 24).
Many times throughout this book, we work with the integers mod $p$, where $p$ is a prime. We can add, subtract, and multiply, but what distinguishes working mod $p$ from working mod an arbitrary integer is that we can divide by any number that is nonzero mod $p$. For example, if we need to solve $3x \equiv 1 \pmod 7$, then we divide by 3 to obtain $x \equiv 5 \pmod 7$ (since $3\cdot 5 \equiv 1 \pmod 7$). In contrast, if we want to solve $3x \equiv 1 \pmod 6$, there is no solution, since we cannot divide by 3 mod 6. Loosely speaking, a set that has the operations of addition, multiplication, subtraction, and division by nonzero elements is called a field. We also require that the associative, commutative, and distributive laws hold.
The basic examples of fields are the real numbers, the complex numbers, the rational numbers, and the integers mod a prime. The set of all integers is not a field since we sometimes cannot divide and obtain an answer in the set (for example, 4/3 is not an integer).
Here is a field with four elements. Consider the set $GF(4) = \{0,\ 1,\ \omega,\ \omega^2\}$
with the following laws:
1. $0 + x = x$ for all $x$.
2. $x + x = 0$ for all $x$.
3. $1\cdot x = x$ for all $x$.
4. $\omega + 1 = \omega^2$.
5. Addition and multiplication are commutative and associative, and the distributive law $x(y + z) = xy + xz$ holds for all $x, y, z$.
Since $\omega\cdot\omega^2 = \omega^3 = \omega\cdot(\omega + 1) = \omega^2 + \omega = (\omega + 1) + \omega = 1,$
we see that $\omega^2$ is the multiplicative inverse of $\omega$, and vice versa. Therefore, every nonzero element of $GF(4)$ has a multiplicative inverse, and $GF(4)$ is a field with four elements.
In general, a field is a set containing elements 0 and 1 (with $0 \ne 1$) and satisfying the following:
It has a multiplication and addition satisfying (1), (3), (5) in the preceding list.
Every element $x$ has an additive inverse (for each $x$, this means there exists an element $-x$ such that $x + (-x) = 0$).
Every nonzero element has a multiplicative inverse.
A field is closed under subtraction. To compute $x - y$, simply compute $x + (-y)$.
The set of matrices with real entries is not a field for two reasons. First, the multiplication is not commutative. Second, there are nonzero matrices that do not have inverses (and therefore we cannot divide by them). The set of nonnegative real numbers is not a field. We can add, multiply, and divide, but sometimes when we subtract the answer is not in the set.
For every power $p^n$ of a prime, there is exactly one finite field with $p^n$ elements, and these are the only finite fields. We'll soon show how to construct them, but first let's point out that if $n \ge 2$, then the integers mod $p^n$ do not form a field. The congruence $px \equiv 1 \pmod{p^n}$ does not have a solution, so we cannot divide by $p$, even though $p \not\equiv 0 \pmod{p^n}$. Therefore, we need more complicated constructions to produce fields with $p^n$ elements.
The field with $p^n$ elements is called $GF(p^n)$. The "GF" is for "Galois field," named for the French mathematician Evariste Galois (1811–1832), who did some early work related to fields.
Here is another way to produce the field $GF(4)$. Let $\mathbf{Z}_2[X]$ be the set of polynomials whose coefficients are integers mod 2. For example, $X^3 + X + 1$ and $X^2 + X$ are in this set. Also, the constant polynomials 0 and 1 are in $\mathbf{Z}_2[X]$. We can add, subtract, and multiply in this set, as long as we work with the coefficients mod 2. For example, $(X + 1)^2 = X^2 + 2X + 1 = X^2 + 1,$
since the term $2X$ disappears mod 2. The important property for our purposes is that we can perform division with remainder, just as with the integers. For example, suppose we divide $X^2 + X + 1$ into $X^4 + X^3 + 1$. We can do this by long division, just as with numbers.
In words, what we did was to divide $X^4$ by $X^2$ and obtain $X^2$ as the first term of the quotient. Then we multiplied this $X^2$ times $X^2 + X + 1$ to get $X^4 + X^3 + X^2$, which we subtracted from $X^4 + X^3 + 1$, leaving $X^2 + 1$. We divided this $X^2$ by $X^2$ and obtained the second term of the quotient, namely 1. Multiplying 1 times $X^2 + X + 1$ and subtracting from $X^2 + 1$ left the remainder $X$. Since the degree of the polynomial $X$ is less than the degree of $X^2 + X + 1$, we stopped. The quotient was $X^2 + 1$ and the remainder was $X$.
We can write this as $X^4 + X^3 + 1 = (X^2 + 1)(X^2 + X + 1) + X.$
Whenever we divide by $X^2 + X + 1$, we can obtain a remainder that is either 0 or a polynomial of degree at most 1 (if the remainder had degree 2 or more, we could continue dividing). Therefore, we define $GF(4)$ to be the set $\{0,\ 1,\ X,\ X + 1\}$
of polynomials of degree at most 1, since these are the remainders that we obtain when we divide by $X^2 + X + 1$. Addition, subtraction, and multiplication are done mod $X^2 + X + 1$. This is completely analogous to what happens when we work with integers mod $n$. In the present situation, we say that two polynomials $f(X)$ and $g(X)$ are congruent mod $X^2 + X + 1$, written $f(X) \equiv g(X) \pmod{X^2 + X + 1}$, if $f(X)$ and $g(X)$ have the same remainder when divided by $X^2 + X + 1$. Another way of saying this is that $f(X) - g(X)$ is a multiple of $X^2 + X + 1$. This means that there is a polynomial $h(X)$ such that $f(X) - g(X) = (X^2 + X + 1)\,h(X)$.
Now let's multiply in $GF(4)$. For example, $X\cdot X = X^2 \equiv X + 1 \pmod{X^2 + X + 1}.$
(It might seem that the right side should be $-X - 1$, but recall that we are working with coefficients mod 2, so $-X - 1$ and $X + 1$ are the same.) As another example, we have $X\cdot(X + 1) = X^2 + X \equiv 1 \pmod{X^2 + X + 1}.$
It is easy to see that we are working with the set $\{0, 1, \omega, \omega^2\}$ from before, with $X$ in place of $\omega$.
Working mod a polynomial can be used to produce finite fields. But we cannot work mod an arbitrary polynomial. The polynomial must be irreducible, which means that it doesn't factor into polynomials of lower degree mod 2. For example, $X^2 + 1$, which is irreducible when we are working with real numbers, is not irreducible when the coefficients are taken mod 2, since $X^2 + 1 = (X + 1)^2$ when we are working mod 2. However, $X^2 + X + 1$ is irreducible: Suppose it factors mod 2 into polynomials of lower degree. The only possible factors of degree 1 mod 2 are $X$ and $X + 1$, and $X^2 + X + 1$ is not a multiple of either of these, even mod 2.
Here is the general procedure for constructing a finite field with $p^n$ elements, where $p$ is prime and $n \ge 1$. We let $\mathbf{Z}_p$ denote the integers mod $p$.
1. $\mathbf{Z}_p[X]$ is the set of polynomials with coefficients mod $p$.
2. Choose $P(X)$ to be an irreducible polynomial mod $p$ of degree $n$.
3. Let $GF(p^n)$ be $\mathbf{Z}_p[X]$ mod $P(X)$. Then $GF(p^n)$ is a field with $p^n$ elements.
The fact that $GF(p^n)$ has $p^n$ elements is easy to see. The possible remainders after dividing by $P(X)$ are the polynomials of the form $a_0 + a_1X + \cdots + a_{n-1}X^{n-1}$, where the coefficients $a_i$ are integers mod $p$. There are $p$ choices for each coefficient, hence $p^n$ possible remainders.
For each $n \ge 1$, there are irreducible polynomials mod $p$ of degree $n$, so this construction produces fields with $p^n$ elements for each prime power $p^n$. What happens if we do the same construction for two different polynomials $P_1(X)$ and $P_2(X)$, both of degree $n$? We obtain two fields, call them $GF(p^n)'$ and $GF(p^n)''$. It is possible to show that these are essentially the same field (the technical term is that the two fields are isomorphic), though this is not obvious since multiplication mod $P_1(X)$ is not the same as multiplication mod $P_2(X)$.
We can easily add, subtract, and multiply polynomials in $GF(p^n)$, but division is a little more subtle. Let's look at an example. The polynomial $X^4 + X + 1$ is irreducible mod 2 (although there are faster methods, one way to show it is irreducible is to divide it by all polynomials of smaller degree in $\mathbf{Z}_2[X]$). Consider the field $GF(2^4) = \mathbf{Z}_2[X] \bmod X^4 + X + 1.$
Since $X^2 + 1$ is not 0, it should have an inverse. The inverse is found using the analog of the extended Euclidean algorithm. First, perform the gcd calculation for $\gcd\left(X^2 + 1,\ X^4 + X + 1\right)$. The procedure (the remainder becomes the new divisor, the old divisor becomes the new dividend, and the quotient is ignored) is the same as for integers: $X^4 + X + 1 = (X^2 + 1)(X^2 + 1) + X,\qquad X^2 + 1 = X\cdot X + 1,\qquad X = X\cdot 1 + 0.$
The last nonzero remainder is 1, which tells us that the "greatest common divisor" of $X^2 + 1$ and $X^4 + X + 1$ is 1. Of course, this must be the case, since $X^4 + X + 1$ is irreducible, so its only factors are 1 and itself.
Now work the Extended Euclidean algorithm to express 1 as a linear combination of $X^2 + 1$ and $X^4 + X + 1$:
| $X^4 + X + 1$ | 1 | 0 | |
| $X^2 + 1$ | 0 | 1 | |
| $X$ | 1 | $X^2 + 1$ | (1st row) $-\,(X^2+1)\cdot$(2nd row) |
| $1$ | $X$ | $X^3 + X + 1$ | (2nd row) $-\,X\cdot$(3rd row). |
The end result is $1 = X\cdot\left(X^4 + X + 1\right) + \left(X^3 + X + 1\right)\left(X^2 + 1\right).$
Reducing mod $X^4 + X + 1$, we obtain $\left(X^3 + X + 1\right)\left(X^2 + 1\right) \equiv 1 \pmod{X^4 + X + 1},$
which means that $X^3 + X + 1$ is the multiplicative inverse of $X^2 + 1$. Whenever we need to divide by $X^2 + 1$, we can instead multiply by $X^3 + X + 1$. This is the analog of what we did when working with the usual integers mod $p$.
In Chapter 8, we discuss AES, which uses $GF(2^8)$, so let's look at this field a little more closely. We'll work mod the irreducible polynomial $X^8 + X^4 + X^3 + X + 1$, since that is the one used by AES. However, there are other irreducible polynomials of degree 8, and any one of them would lead to similar calculations. Every element can be represented uniquely as a polynomial $b_7X^7 + b_6X^6 + \cdots + b_1X + b_0,$
where each $b_i$ is 0 or 1. The 8 bits $b_7b_6\cdots b_1b_0$ represent a byte, so we can represent the elements of $GF(2^8)$ as 8-bit bytes. For example, the polynomial $X^7 + X^6 + X^3 + X + 1$ becomes 11001011. Addition is the XOR of the bits: $\left(X^7 + X^6 + X^3 + X + 1\right) + \left(X^4 + X^3 + 1\right) = X^7 + X^6 + X^4 + X$ corresponds to $11001011 \oplus 00011001 = 11010010.$
Multiplication is more subtle and does not have as easy an interpretation. That is because we are working mod the polynomial $X^8 + X^4 + X^3 + X + 1$, which we can represent by the 9 bits 100011011. First, let's multiply $X^7 + X^6 + X^3 + X + 1$ by $X$. With polynomials, we calculate $X\cdot\left(X^7 + X^6 + X^3 + X + 1\right) = X^8 + X^7 + X^4 + X^2 + X \equiv X^7 + X^3 + X^2 + 1 \pmod{X^8 + X^4 + X^3 + X + 1},$
using $X^8 \equiv X^4 + X^3 + X + 1$. The same operation with bits becomes $11001011 \to 110010110 \to 110010110 \oplus 100011011 = 010001101,$
which corresponds to the preceding answer. In general, we can multiply by $X$ by the following algorithm:
Shift left and append a 0 as the last bit.
If the first bit is 0, stop.
If the first bit is 1, XOR with 100011011.
The reason we stop in step 2 is that if the first bit is 0, then the polynomial still has degree less than 8 after we multiply by $X$, so it does not need to be reduced. To multiply by higher powers of $X$, multiply by $X$ several times. For example, multiplication by $X^3$ can be done with three shifts and at most three XORs. Multiplication by an arbitrary polynomial can be accomplished by multiplying by the various powers of $X$ appearing in that polynomial, then adding (i.e., XORing) the results.
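The shift-and-XOR algorithm can be written out as follows; `gf_mul` extends it to multiplication by an arbitrary byte in the way just described. The test value $0x53\cdot 0xCA = 0x01$ is a well-known identity in this field (the two bytes are inverses):

```python
AES_POLY = 0b100011011     # X^8 + X^4 + X^3 + X + 1 as 9 bits

def xtime(byte):
    """Multiply an element of GF(2^8) by X: shift left, then reduce."""
    byte <<= 1
    if byte & 0b100000000:  # ninth bit is 1: XOR with the modulus
        byte ^= AES_POLY
    return byte

def gf_mul(x, y):
    """Multiply two elements of GF(2^8) by repeated xtime and XOR."""
    result = 0
    while y:
        if y & 1:
            result ^= x     # add (XOR) the current multiple of x
        x = xtime(x)        # next power of X times the original x
        y >>= 1
    return result

print(bin(xtime(0b11001011)))    # 0b10001101, as computed in the text
print(hex(gf_mul(0x53, 0xCA)))   # 0x1: these two bytes are inverses
```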
In summary, we see that the field operations of addition and multiplication in can be carried out very efficiently. Similar considerations apply to any finite field.
The analogy between the integers mod a prime and polynomials mod an irreducible polynomial is quite remarkable. We summarize in the following.
Let $GF(p^n)^*$ denote the nonzero elements of $GF(p^n)$. This set, which has $p^n - 1$ elements, is closed under multiplication, just as the integers not congruent to 0 mod $p$ are closed under multiplication. It can be shown that there is a generating polynomial $g(X)$ such that every element in $GF(p^n)^*$ can be expressed as a power of $g(X)$. This also means that the smallest exponent $k$ such that $g(X)^k \equiv 1$ is $k = p^n - 1$. This is the analog of a primitive root for primes. There are $\varphi(p^n - 1)$ such generating polynomials, where $\varphi$ is Euler's function. An interesting situation occurs when $p = 2$ and $2^n - 1$ is prime. In this case, every nonzero polynomial in $GF(2^n)$, except 1, is a generating polynomial. (Remark, for those who know some group theory: The set $GF(2^n)^*$ is a group of prime order in this case, so every element except the identity is a generator.)
The discrete log problem mod a prime, which we'll discuss in Chapter 10, has an analog for finite fields; namely, given $g(X)$ and $h(X)$, find an integer $k$ such that $h(X) \equiv g(X)^k$ in $GF(p^n)$. Finding such a $k$ is believed to be very hard in most situations.
We can now explain a phenomenon that is mentioned in Section 5.2 on LFSR sequences.
Suppose that we have a recurrence relation $x_{n+d} \equiv c_0x_n + c_1x_{n+1} + \cdots + c_{d-1}x_{n+d-1} \pmod 2.$
For simplicity, we assume that the associated polynomial $P(X) = X^d + c_{d-1}X^{d-1} + \cdots + c_1X + c_0$
is irreducible mod 2. Then $\mathbf{Z}_2[X]$ mod $P(X)$ is the field $GF(2^d)$. We regard $GF(2^d)$ as a vector space over $\mathbf{Z}_2$ with basis $\{1, X, X^2, \ldots, X^{d-1}\}$. Multiplication by $X$ gives a linear transformation of this vector space. Since $X\cdot X^{d-1} = X^d \equiv c_0 + c_1X + \cdots + c_{d-1}X^{d-1} \pmod{P(X)},$
multiplication by $X$ is represented by the matrix $M = \begin{pmatrix} 0 & 0 & \cdots & 0 & c_0\\ 1 & 0 & \cdots & 0 & c_1\\ 0 & 1 & \cdots & 0 & c_2\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & 1 & c_{d-1}\end{pmatrix}.$
Suppose we know $(x_n, x_{n+1}, \ldots, x_{n+d-1})$. We compute $(x_n, x_{n+1}, \ldots, x_{n+d-1})\,M = (x_{n+1}, x_{n+2}, \ldots, x_{n+d}).$
Therefore, multiplication by $M$ shifts the indices by 1. It follows easily that multiplication on the right by $M^k$ sends $(x_n, \ldots, x_{n+d-1})$ to $(x_{n+k}, \ldots, x_{n+k+d-1})$. If $M^k = I$, the identity matrix, this must be the original vector. Since there are $2^d - 1$ nonzero elements in $GF(2^d)$, it follows from Lagrange's theorem in group theory that $X^{2^d-1} \equiv 1 \pmod{P(X)}$, which implies that $M^{2^d-1} = I$. Therefore, we know that $x_{n+2^d-1} = x_n$ for all $n$.
For any set of initial values (we'll assume that at least one initial value is nonzero), the sequence will repeat after $N$ terms, where $N$ is the smallest positive integer such that $X^N \equiv 1 \pmod{P(X)}$. It can be shown that $N$ divides $2^d - 1$.
In fact, the period of such a sequence is exactly $N$. This can be proved as follows, using a few results from linear algebra: Let $v = (x_1, \ldots, x_d)$ be the row vector of initial values. The sequence repeats when $vM^k = v$. This means that the nonzero row vector $v$ is in the left null space of the matrix $M^k - I$, so $\det\left(M^k - I\right) = 0$. But this means that there is a nonzero column vector $w$ in the right null space of $M^k - I$. That is, $\left(M^k - I\right)w = 0$. Since the matrix $M$ represents the linear transformation given by multiplication by $X$ with respect to the basis $\{1, X, \ldots, X^{d-1}\}$, this can be changed back into a relation among polynomials: $\left(X^k - 1\right)\,w(X) \equiv 0 \pmod{P(X)},$
where $w(X)$ is the polynomial corresponding to $w$. But $w(X)$ is a nonzero element of the field $GF(2^d)$, so we can divide by this element to get $X^k \equiv 1 \pmod{P(X)}$. Since $k = N$ is the first time this happens, the sequence first repeats after $N$ terms, so it has period $N$.
As mentioned previously, when $2^d - 1$ is prime, all polynomials (except 0 and 1) are generating polynomials for $GF(2^d)^*$. In particular, $X$ is a generating polynomial and therefore $N = 2^d - 1$ is the period of the recurrence.
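These period statements can be checked by direct simulation. The recurrence below, with $P(X) = X^4 + X + 1$ irreducible and $2^4 - 1 = 15$ prime, is an illustrative choice:

```python
def lfsr_period(coeffs, init):
    """Period of x_{n+d} = c_0 x_n + ... + c_{d-1} x_{n+d-1} (mod 2),
    found by running the recurrence until the state vector repeats."""
    state = start = tuple(init)
    k = 0
    while True:
        bit = sum(c * s for c, s in zip(coeffs, state)) % 2
        state = state[1:] + (bit,)   # shift in the new output bit
        k += 1
        if state == start:
            return k

# P(X) = X^4 + X + 1 is irreducible mod 2 and 2^4 - 1 = 15 is prime,
# so any nonzero initial values give the maximal period 15:
print(lfsr_period([1, 1, 0, 0], [1, 0, 0, 0]))   # 15
```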
There are many situations where we want to approximate a real number by a rational number. For example, we can approximate $\pi = 3.14159265\ldots$ by $3.14 = \frac{314}{100} = \frac{157}{50}$. But $\frac{22}{7} = 3.142857\ldots$ is a slightly better approximation, and it is more efficient in the sense that it uses a smaller denominator than $\frac{157}{50}$. The method of continued fractions is a procedure that yields this type of good approximations. In this section, we summarize some basic facts. For proofs and more details, see, for example, [Hardy-Wright], [Niven et al.], [Rosen], and [KraftW].
An easy way to approximate a real number $x$ is to take the largest integer less than or equal to $x$. This is often denoted by $\lfloor x\rfloor$. For example, $\lfloor\pi\rfloor = 3$. If we want to get a better approximation, we need to look at the remaining fractional part. For $\pi$ this is $0.14159265\ldots$. This looks close to $1/7$. One way to express this is to look at $1/0.14159265\ldots = 7.0625\ldots$. We can approximate this last number by 7, and therefore conclude that 1/7 is indeed a good approximation for $0.14159265\ldots$ and that $3 + \frac17 = \frac{22}{7}$ is a good approximation for $\pi$. Continuing in this manner yields even better approximations. For example, the next step is to compute $1/0.0625\ldots = 15.9966\ldots$ and then take the greatest integer to get 15 (yes, 16 is closer, but the algorithm corrects for this in the next step). We now have $\pi \approx 3 + \cfrac{1}{7 + \cfrac{1}{15}} = \frac{333}{106}.$
If we continue one more step, we obtain $\pi \approx 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1}}} = \frac{355}{113}.$
This last approximation is very accurate: $\frac{355}{113} = 3.14159292\ldots, \qquad \pi = 3.14159265\ldots.$
This procedure works for arbitrary real numbers. Start with a real number $x$. Let $a_0 = \lfloor x\rfloor$ and $x_0 = x$. Then (if $x_i \ne a_i$; otherwise, stop) define $x_{i+1} = \frac{1}{x_i - a_i}, \qquad a_{i+1} = \lfloor x_{i+1}\rfloor.$
We obtain the approximations $x \approx a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_k}}}} = \frac{p_k}{q_k}.$
We have therefore produced a sequence of rational numbers $\frac{p_0}{q_0}, \frac{p_1}{q_1}, \frac{p_2}{q_2}, \ldots$. It can be shown that each rational number $\frac{p_k}{q_k}$ gives a better approximation to $x$ than any of the preceding rational numbers, and in fact better than any rational number $\frac pq$ with $q < q_k$. Moreover, the following holds.
If $\left|x - \frac pq\right| < \frac{1}{2q^2}$ for integers $p, q$, then $\frac pq = \frac{p_k}{q_k}$ for some $k$.
For example, $\left|\pi - \frac{22}{7}\right| \approx 0.00126 < \frac{1}{2\cdot 7^2}$ and $\left|\pi - \frac{355}{113}\right| \approx 2.7\times 10^{-7} < \frac{1}{2\cdot 113^2}$.
Continued fractions yield a convenient way to recognize rational numbers from their decimal expansions. For example, suppose we encounter the decimal 3.764705882 and we suspect that it is the beginning of the decimal expansion of a rational number with small denominator. The first few terms of the continued fraction are $3,\ 1,\ 3,\ 4,\ 9803921,\ \ldots.$
The fact that 9803921 is large indicates that the preceding approximation is quite good, so we calculate $3 + \cfrac{1}{1 + \cfrac{1}{3 + \cfrac{1}{4}}} = \frac{64}{17} = 3.76470588\ldots,$
which agrees with all of the digits of the original 3.764705882. Therefore, $\frac{64}{17}$ is a likely candidate for the answer. Note that if we had included the 9803921, we would have obtained a fraction that also agrees with the original decimal expansion but has a significantly larger denominator.
Now let's apply the procedure to 12345/11111. We have
a_0 = 1, a_1 = 9, a_2 = 246, a_3 = 1, a_4 = 4.
This yields the numbers
1,  10/9,  2461/2215,  2471/2224,  12345/11111.
Note that the numbers 1, 9, 246, 1, 4 are the quotients obtained during the computation of gcd(12345, 11111) in Subsection 3.1.3 (see Exercise 49).
Calculating the fractions such as
3 + 1/(7 + 1/(15 + 1/1))
can become tiresome when done in the straightforward way. Fortunately, there is a faster method. Define
p_{−2} = 0,  p_{−1} = 1,  q_{−2} = 1,  q_{−1} = 0.
Then
p_k = a_k p_{k−1} + p_{k−2},   q_k = a_k q_{k−1} + q_{k−2},
and p_k/q_k is the kth approximation produced above. Using these relations, we can compute each fraction p_k/q_k from the previous ones, rather than having to start a new computation every time a new a_k is found.
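The recurrence can be sketched as follows (assuming the standard initial values p_{−2} = 0, p_{−1} = 1, q_{−2} = 1, q_{−1} = 0; the function name is ours):

```python
def convergents(a):
    """Given partial quotients a = [a0, a1, ...], return the fractions
    (p_k, q_k) via p_k = a_k*p_{k-1} + p_{k-2}, q_k = a_k*q_{k-1} + q_{k-2}."""
    p_prev2, p_prev1 = 0, 1   # p_{-2}, p_{-1}
    q_prev2, q_prev1 = 1, 0   # q_{-2}, q_{-1}
    out = []
    for ak in a:
        p = ak * p_prev1 + p_prev2
        q = ak * q_prev1 + q_prev2
        out.append((p, q))
        p_prev2, p_prev1 = p_prev1, p
        q_prev2, q_prev1 = q_prev1, q
    return out

print(convergents([3, 7, 15, 1]))   # [(3, 1), (22, 7), (333, 106), (355, 113)]
```

Feeding in the quotients 1, 9, 246, 1, 4 from the Euclidean algorithm for 12345 and 11111 reproduces 12345/11111 as the last fraction.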
Find integers and such that
Find
Using the identity factor into a product of two integers greater than 1.
Using the congruence deduce that and show that is a multiple of 3.
Solve
Suppose you write a message as a number Encrypt as How would you decrypt? (Hint: Decryption is done by raising the ciphertext to a power mod 31. Fermat’s theorem will be useful.)
Solve
Find all solutions of
Find all solutions of
Find all solutions of
Find all solutions of
Let Show that if is composite then has a prime factor
Use the Euclidean algorithm to compute
Using the result of parts (a) and (b) and the fact that show that 257 is prime. (Remark: This method of computing one gcd, rather than doing several trial divisions (by 2, 3, 5, ...), is often faster for checking whether small primes divide a number.)
Compute
Compute
Factor 4883 and 4369 into products of primes.
What is ? Using the Extended Euclidean algorithm, find mod
What is ? Does mod 1111 exist?
Find where consists of repeated 1s. What can you say about mod as a function of ?
Let define the Fibonacci numbers Use the Euclidean algorithm to compute for all
Find
Let be formed with repeated 1’s and let be formed with repeated 1’s. Find (Hint: Compare your computations in parts (a) and (b).)
Let Show that none of the numbers are prime.
Let be prime. Suppose and are integers such that Show that either or
Show that if are integers with and then
Let be prime.
Show that if then
Show that if then has solutions with
Let be prime. Show that the only solutions to are (Hint: Apply Exercise 13(a) to )
Find with and
Suppose and What is congruent to mod 70?
Find with and (Hint: Replace with for a suitable and similarly for the second congruence.)
A group of people are arranging themselves for a parade. If they line up three to a row, one person is left over. If they line up four to a row, two people are left over, and if they line up five to a row, three people are left over. What is the smallest possible number of people? What is the next smallest number? (Hint: Interpret this problem in terms of the Chinese remainder theorem.)
You want to find such that when you divide by each of the numbers from 2 to 10, the remainder is 1. The smallest such is What is the next smallest ? (The answer is less than 3000.)
Find all four solutions to (Note that )
Find all solutions to (There are only two solutions in this case. This is because )
You need to compute m^65537 mod 581859289607 for some number m. A friend offers to help: 1 cent for each multiplication mod 581859289607. Your friend is hoping to get more than $650. Describe how you can have the friend do the computation for less than 25 cents. (Note: 65537 = 2^16 + 1 is the most commonly used RSA encryption exponent.)
Divide by 101. What is the remainder?
Divide by 11. What is the remainder?
Find the last 2 digits of
Let p ≡ 3 (mod 4) be prime. Show that x² ≡ −1 (mod p) has no solutions. (Hint: Suppose x exists. Raise both sides of x² ≡ −1 to the power (p − 1)/2 and use Fermat's theorem. Also, (−1)^((p−1)/2) = −1 because (p − 1)/2 is odd.)
Let be prime. Show that for all
Let be prime and let and be integers. Show that
Evaluate
Use part (a) to find the last digit of (Note: means since the other possible interpretation would be which is written more easily without a second exponentiation.) (Hint: Use part (a) and the Basic Principle that follows Euler’s Theorem.)
You are told that exactly one of the numbers
is prime and you have one minute to figure out which one. Describe calculations you could do (with software such as MATLAB or Mathematica) that would give you a very good chance of figuring out which number is prime. Do not do the calculations. Do not try to factor the numbers. They do not have any prime factors less than You may use modular exponentiation, but you may not use commands of the form “IsPrime[n]” or “NextPrime[n].” (See Computer Problem 3 below.)
Let p = 7, 13, or 19. Show that a^1728 ≡ 1 (mod p) for all a with gcd(a, p) = 1.
Let p = 7, 13, or 19. Show that a^1729 ≡ a (mod p) for all a. (Hint: Consider the case p | a separately.)
Show that a^1729 ≡ a (mod 1729) for all a. Composite numbers n such that a^n ≡ a (mod n) for all a are called Carmichael numbers. They are rare (561 is another example), but there are infinitely many of them [Alford et al. 2].
Show that 341 = 11 · 31 and 2^10 ≡ 1 (mod 341).
Show that 2^340 ≡ 1 (mod 341).
Is 341 prime?
Let be prime and let Let Show that
Use the method of part (a) to solve
You are appearing on the Math Superstars Show and, for the final question, you are given a 500-digit number and are asked to guess whether or not it is prime. You are told that is either prime or the product of a 200-digit prime and a 300-digit prime. You have one minute, and fortunately you have a computer. How would you make a guess that’s very probably correct? Name any theorems that you are using.
Compute for all of the divisors of (namely, 1, 2, 5, 10), and find the sum of these
Repeat part (a) for all of the divisors of 12.
Let Conjecture the value of where the sum is over the divisors of (This result is proved in many elementary number theory texts.)
Find a number mod 7 that is a primitive root mod 7 and find a number that is not a primitive root mod 7. Show that and have the desired properties.
Show that every nonzero congruence class mod 11 is a power of 2, and therefore 2 is a primitive root mod 11.
Note that Find such that (Hint: What is the inverse of ?)
Show that every nonzero congruence class mod 11 is a power of 8, and therefore 8 is a primitive root mod 11.
Let be prime and let be a primitive root mod Let with Let Show that
Let and be as in part (d). Show that is a primitive root mod (Remark: Since there are possibilities for the exponent in part (d), this yields all of the primitive roots mod )
Use the method of part (e) to find all primitive roots for given that 2 is a primitive root.
It is known that 14 is a primitive root for the prime Let (The exponent is )
Explain why
Explain why
Find the inverse of
Find all values of such that is invertible.
Find the inverse of
Find all primes for which is not invertible.
Use the Legendre symbol to show that has a solution.
Use the method of Section 3.9 to find a solution to
Use the Legendre symbol to determine which of the following congruences have solutions (each modulus is prime):
Let be odd and assume Show that if then is not a square mod
Show that
Show that 3 is not a square mod 35.
Let Show that
Show that
Show that
Use the procedure of Exercise 54 to show that 3 is a primitive root mod 65537. (Remark: The same proof shows that 3 is a primitive root for any prime p such that p − 1 is a power of 2. However, there are only six known primes p with p − 1 a power of 2; namely, 2, 3, 5, 17, 257, 65537. They are called Fermat primes.)
Show that the only irreducible polynomials in of degree at most 2 are and
Show that is irreducible in (Hint: If it factors, it must have at least one factor of degree at most 2.)
Show that and
Show that
Show that is irreducible in
Find the multiplicative inverse of in
Show that the quotients in the Euclidean algorithm for are exactly the numbers that appear in the continued fraction of
Compute several steps of the continued fractions of and Do you notice any patterns? (It can be shown that the ’s in the continued fraction of every irrational number of the form with rational and eventually become periodic.)
For each of let be such that in the continued fraction of Compute and and show that and give a solution of what is known as Pell’s equation:
Use the method of part (b) to solve
Compute several steps of the continued fraction expansion of Do you notice any patterns? (On the other hand, the continued fraction expansion of seems to be fairly random.)
Compute several steps of the continued fraction expansion of and compute the corresponding numbers and (defined in Section 3.12). The sequences and are what famous sequence of numbers?
Let a and n be integers with n > 0 and gcd(a, n) = 1. The order of a mod n is the smallest positive integer r such that a^r ≡ 1 (mod n). We denote r = ord_n(a).
Show that
Show that if m is a multiple of r, then a^m ≡ 1 (mod n).
Suppose a^m ≡ 1 (mod n). Write m = qr + s with 0 ≤ s < r (this is just division with remainder). Show that a^s ≡ 1 (mod n).
Using the definition of r and the fact that 0 ≤ s < r, show that s = 0 and therefore r | m. This, combined with part (b), yields the result that a^m ≡ 1 (mod n) if and only if ord_n(a) | m.
Show that ord_n(a) | φ(n).
This exercise will show by example how to use the results of Exercise 53 to prove a number is a primitive root mod a prime p once we know the factorization of p − 1. In particular, we'll show that 7 is a primitive root mod 601. Note that 600 = 2³ · 3 · 5².
Show that if an integer d < 600 divides 600, then it divides at least one of 300, 200, 120 (these numbers are 600/2, 600/3, and 600/5).
Show that if ord_601(7) ≠ 600, then it divides one of the numbers 300, 200, 120.
A calculation shows that
Why can we conclude that ord_601(7) does not divide 300, 200, or 120?
Show that 7 is a primitive root mod 601.
In general, suppose p is a prime and p − 1 = q_1^{e_1} ⋯ q_s^{e_s} is the factorization of p − 1 into primes. Describe a procedure to check whether a number g is a primitive root mod p. (Therefore, if we need to find a primitive root mod p, we can simply use this procedure to test the numbers 2, 3, 5, 6, ... in succession until we find one that is a primitive root.)
We want to find an exponent such that
Observe that but It can be shown (Exercise 46) that 3 is a primitive root mod 65537, which implies that if and only if Use this to show that but 4096 does not divide (Hint: Raise both sides of to the 16th and to the 32nd powers.)
Use the result of part (a) to conclude that there are only 16 possible choices for that need to be considered. Use this information to determine This problem shows that if has a special structure, for example, a power of 2, then this can be used to avoid exhaustive searches. Therefore, such primes are cryptographically weak. See Exercise 12 in Chapter 10 for a reinterpretation of the present problem.
Let be an integer written in binary (for example, when we have ). Let and be integers. Perform the following procedure:
Start with and
If let If let
Let
If stop. If add 1 to and go to (2).
Show that at the end of this procedure, y ≡ a^x (mod n).
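One common realization of a binary exponentiation procedure like the one in this exercise scans the bits of the exponent from the most significant end, squaring at each step (a sketch; the names are ours, and the exact bookkeeping of the exercise's steps may differ):

```python
def power_mod(a, x, n):
    """Left-to-right square-and-multiply: square for every bit of x,
    and multiply in an extra factor of a when the bit is 1."""
    y = 1
    for bit in bin(x)[2:]:      # binary digits of x, high bit first
        y = (y * y) % n         # squaring doubles the exponent so far
        if bit == '1':
            y = (y * a) % n     # a 1-bit contributes one factor of a
    return y

print(power_mod(2, 1234, 789) == pow(2, 1234, 789))   # True
```

The number of multiplications is proportional to the number of bits of x, which is why the 65537th power earlier in the exercises costs only a few multiplications rather than tens of thousands.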
Let x, a, and n be positive integers. Show that the following procedure computes a^x mod n.
Start with y = 1.
If x is even, let x = x/2 and let a = a² mod n.
If x is odd, let x = x − 1 and let y = ay mod n.
If x > 0, go to step 2.
Output y.
(Remark: This algorithm is similar to the one in part (a), but it uses the binary bits of in reverse order.)
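The even/odd procedure in this part can be sketched as (names ours):

```python
def power_mod_rl(a, x, n):
    """Right-to-left binary exponentiation: the even case squares the
    base and halves the exponent; the odd case folds a factor into y."""
    y = 1
    a %= n
    while x > 0:
        if x % 2 == 1:
            y = (y * a) % n   # odd: y = a*y mod n, x = x - 1
            x -= 1
        else:
            a = (a * a) % n   # even: a = a^2 mod n, x = x/2
            x //= 2
    return y

print(power_mod_rl(3, 1000, 101) == pow(3, 1000, 101))   # True
```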
Here is how to construct the x guaranteed by the general form of the Chinese remainder theorem. Suppose m_1, ..., m_k are integers with gcd(m_i, m_j) = 1 whenever i ≠ j. Let a_1, ..., a_k be integers. Perform the following procedure:
For i = 1, ..., k, let z_i = (m_1 m_2 ⋯ m_k)/m_i.
For i = 1, ..., k, let y_i ≡ z_i^{−1} (mod m_i) (the inverse exists because gcd(z_i, m_i) = 1).
Let x = a_1 y_1 z_1 + ⋯ + a_k y_k z_k.
Show that x ≡ a_i (mod m_i) for all i.
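Assuming the standard construction (z_i is the product of the other moduli and y_i is its inverse mod m_i), the procedure can be sketched as (names ours; `pow(z, -1, m)` computes a modular inverse in Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ residues[i] (mod moduli[i]) for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for a_i, m_i in zip(residues, moduli):
        z_i = M // m_i                # product of the other moduli
        y_i = pow(z_i, -1, m_i)       # inverse of z_i mod m_i
        x += a_i * y_i * z_i          # contributes a_i mod m_i, 0 mod others
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))   # 23: 23%3==2, 23%5==3, 23%7==2
```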
Alice designs a cryptosystem as follows (this system is due to Rabin). She chooses two distinct primes p and q (preferably, both p and q are congruent to 3 mod 4) and keeps them secret. She makes n = pq public. When Bob wants to send Alice a message m, he computes c ≡ m² (mod n) and sends c to Alice. She makes a decryption machine that does the following: When the machine is given a number c, it computes the square roots of c mod n, which it can do since it knows p and q. There is usually more than one square root. It chooses one at random, and gives it to Alice. When Alice receives c from Bob, she puts it into her machine. If the output from the machine is a meaningful message, she assumes it is the correct message. If it is not meaningful, she puts c into the machine again. She continues until she gets a meaningful message.
Why should Alice expect to get a meaningful message fairly soon?
If Oscar intercepts c (he already knows n), why should it be hard for him to determine the message m?
If Eve breaks into Alice's office and thereby is able to try a few chosen-ciphertext attacks on Alice's decryption machine, how can she determine the factorization of n?
This exercise shows that the Euclidean algorithm computes the gcd. Let be as in Subsection 3.1.3.
Let be a common divisor of Show that and use this to show that
Let be as in (a). Use induction to show that for all In particular, the last nonzero remainder.
Use induction to show that for
Using the facts that and show that and then Therefore, is a common divisor of
Use (b) to show that for all common divisors and therefore is the greatest common divisor.
Let p and q be distinct primes.
Show that among the integers m satisfying 1 ≤ m < pq, there are q − 1 multiples of p, and there are p − 1 multiples of q.
Suppose gcd(m, pq) > 1. Show that m is a multiple of p or a multiple of q.
Show that if 1 ≤ m < pq, then m cannot be a multiple of both p and q.
Show that the number of integers m with 1 ≤ m < pq such that gcd(m, pq) = 1 is (p − 1)(q − 1). (Remark: This proves the formula that φ(pq) = (p − 1)(q − 1).)
Give an example of integers m_1, m_2 with gcd(m_1, m_2) > 1 and integers a_1, a_2 such that the simultaneous congruences
x ≡ a_1 (mod m_1),  x ≡ a_2 (mod m_2)
have no solution.
Give an example of integers m_1, m_2 with gcd(m_1, m_2) > 1 and integers a_1, a_2 such that the simultaneous congruences
x ≡ a_1 (mod m_1),  x ≡ a_2 (mod m_2)
have a solution.
Evaluate
Find integers and with
Find integers and with
You are told that exactly one of the numbers
is prime and you have one minute to figure out which one. They do not have any prime factors less than You may use modular exponentiation, but you may not use commands of the form “IsPrime[n]” or “NextPrime[n].” (This makes explicit Exercise 30 above.)
Find the last five digits of (Note: Don’t ask the computer to print It is too large!)
Look at the decimal expansion of Note that the consecutive digits 71, the consecutive digits 271, and the consecutive digits 4523 form primes. Find the first set of five consecutive digits that form a prime (a set of digits beginning with 0 does not count as a five-digit number).
Solve
Find all solutions to
Find an integer such that when it is divided by 101 the remainder is 17, when it is divided by 201 the remainder is 18, and when it is divided by 301 the remainder is 19.
Let Show that Find an exponent such that
Let Find and with but
Let
Find the inverse of
For which primes does not have an inverse mod ?
Find the square roots of 26055 mod the prime 34807.
Find all square roots of 1522756 mod 2325781.
Try to find a square root of 48382 mod the prime 83987, using the method of Section 3.9. Square your answer to see if it is correct. What number did you find the square root of?
The one-time pad, which is an unbreakable cryptosystem, was described by Frank Miller in 1882 as a means of encrypting telegrams. It was rediscovered by Gilbert Vernam and Joseph Mauborgne around 1918. In terms of security, it is the best possible system, but implementation makes it unsuitable for most applications.
In this chapter, we introduce the one-time pad and show why a given key should not be used more than once. We then introduce the important concepts of perfect secrecy and ciphertext indistinguishability, topics that have become prominent in cryptography in recent years.
In many situations involving computers, it is more natural to represent data as strings of 0s and 1s, rather than as letters and numbers.
Numbers can be converted to binary (or base 2), if desired, which we’ll quickly review. Our standard way of writing numbers is in base 10. For example, 123 means 1·10² + 2·10 + 3. Binary uses 2 in place of 10 and needs only the digits 0 and 1. For example, 110101 in binary represents 1·2⁵ + 1·2⁴ + 0·2³ + 1·2² + 0·2¹ + 1·2⁰ (which equals 53 in base 10).
Each 0 or 1 is called a bit. A representation that takes eight bits is called an eight-bit number, or a byte. The largest number that 8 bits can represent is 255, and the largest number that 16 bits can represent is 65535.
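The conversion just described can be sketched as (the function name is ours):

```python
def to_binary(n):
    """Convert a nonnegative integer to a binary string by repeatedly
    dividing by 2 and collecting the remainders (low-order bit first)."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

print(to_binary(53))          # 110101, as in the text
print(int("110101", 2))       # 53, converting back
print(2**8 - 1, 2**16 - 1)    # 255 65535: largest 8- and 16-bit values
```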
Often, we want to deal with more than just numbers. In this case, words, symbols, letters, and numbers are given binary representations. There are many possible ways of doing this. One of the standard ways is called ASCII, which stands for American Standard Code for Information Interchange. Each character is represented using seven bits, allowing for 128 possible characters and symbols to be represented. Eight-bit blocks are common for computers to use, and for this reason, each character is often represented using eight bits. The eighth bit can be used for checking parity to see if an error occurred in transmission, or is often used to extend the list of characters to include symbols such as ü and è.
Table 4.1 gives the ASCII equivalents for some standard symbols.
Start by representing the message as a sequence of 0s and 1s. This can be accomplished by writing all numbers in binary, for example, or by using ASCII, as discussed in the previous section. But the message could also be a digitized video or audio signal.
The key is a random sequence of 0s and 1s of the same length as the message. Once a key is used, it is discarded and never used again. The encryption consists of adding the key to the message mod 2, bit by bit. This process is often called exclusive or, and is denoted by XOR or ⊕. In other words, we use the rules 0 ⊕ 0 = 0, 0 ⊕ 1 = 1 ⊕ 0 = 1, 1 ⊕ 1 = 0. For example, if the message is 00101001 and the key is 10101100, we obtain the ciphertext as follows:
  (message)    00101001
  (key)        10101100
  (ciphertext) 10000101
Decryption uses the same key. Simply add the key onto the ciphertext: 10000101 ⊕ 10101100 = 00101001.
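In code, encryption and decryption are the same XOR operation (using the message 00101001 and key 10101100 from the text; the function name is ours):

```python
def xor_bits(a, b):
    """Bitwise addition mod 2 (XOR) of two equal-length bit strings."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

cipher = xor_bits("00101001", "10101100")   # message XOR key
print(cipher)                               # 10000101
print(xor_bits(cipher, "10101100"))         # same key decrypts: 00101001
```

Decryption works because (m ⊕ k) ⊕ k = m: XORing the key in twice cancels it.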
A variation is to leave the plaintext as a sequence of letters. The key is then a random sequence of shifts, each one between 0 and 25. Decryption uses the same key, but subtracts instead of adding the shifts.
This encryption method is completely unbreakable for a ciphertext-only attack. For example, suppose the ciphertext is FIOWPSLQNTISJQL. The plaintext could be wewillwinthewar or it could be theduckwantsout. Each one is possible, along with all other messages of the same length. Therefore the ciphertext gives no information about the plaintext (except for its length). This will be made more precise in Section 4.4 and when we discuss Shannon’s theory of entropy in Chapter 20.
If we have a piece of the plaintext, we can find the corresponding piece of the key, but it will tell us nothing about the remainder of the key. In most cases a chosen plaintext or chosen ciphertext attack is not possible. But such an attack would only reveal the part of the key used during the attack, which would not be useful unless this part of the key were to be reused.
How do we implement this system, and where can it be used? The key can be generated in advance. Of course, there is the problem of generating a truly random sequence of 0s and 1s. One way would be to have some people sitting in a room flipping coins, but this would be too slow for most purposes. It is often suggested that we could take a Geiger counter and count how many clicks it makes in a small time period, recording a 0 if this number is even and 1 if it is odd, but care must be taken to avoid biases (see Exercise 12 in Chapter 5). There are other ways that are faster but not quite as random that can be used in practice (see Chapter 5); but it is easy to see that quickly generating a good key is difficult. Once the key is generated, it can be sent by a trusted courier to the recipient. The message can then be sent when needed. It is reported that the “hot line” between Washington, D.C., and Moscow used one-time pads for secure communications between the leaders of the United States and the U.S.S.R. during the Cold War.
A disadvantage of the one-time pad is that it requires a very long key, which is expensive to produce and expensive to transmit. Once the key is used up, it is dangerous to reuse it for a second message; for example, any knowledge of the first message then gives knowledge of the second. Therefore, in most situations, various methods are used in which a small input can generate a reasonably random sequence of 0s and 1s, hence an “approximation” to a one-time pad. The amount of information carried by the courier is then several orders of magnitude smaller than the messages that will be sent. Two such methods, which are fast but not highly secure, are described in Chapter 5.
A variation of the one-time pad has been developed by Maurer, Rabin, Ding, and others. Suppose it is possible to have a satellite produce and broadcast several random sequences of bits at a rate fast enough that no computer can store more than a very small fraction of the outputs. Alice wants to send a message to Bob. They use a public key method such as RSA (see Chapter 9) to agree on a method of sampling bits from the random bit streams. Alice and Bob then use these bits to generate a key for a one-time pad. By the time Eve has decrypted the public key transmission, the random bits collected by Alice and Bob have disappeared, so Eve cannot decrypt the message. In fact, since the encryption used a one-time pad, she can never decrypt it, so Alice and Bob have achieved everlasting security for their message. Note that bounded storage is an integral assumption for this procedure. The production and the accurate sampling of the bit streams are also important implementation issues.
Alice sends messages to Bob, Carla, and Dante. She encrypts each message with a one-time pad, but she’s lazy and uses the same key for each message. In this section, we’ll show how Eve can decrypt all three messages.
Suppose the messages are m1, m2, m3 and the key is k. The ciphertexts are computed as ci = mi ⊕ k. Eve computes
c1 ⊕ c2 = (m1 ⊕ k) ⊕ (m2 ⊕ k) = m1 ⊕ m2.
Similarly, she obtains m1 ⊕ m3 = c1 ⊕ c3 and m2 ⊕ m3 = c2 ⊕ c3. The key has disappeared, and Eve's task is to deduce m1, m2, m3 from knowledge of m1 ⊕ m2, m1 ⊕ m3, m2 ⊕ m3. The following example shows some basic ideas of the method.
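A quick sketch of the key cancellation (the messages and key here are our own illustrative values):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

m1, m2 = b"HELLO BOB", b"HELLO EVE"
key = b"\x13\x07\x5a\x2e\x41\x66\x09\x31\x7c"   # the same key, reused
c1, c2 = xor_bytes(m1, key), xor_bytes(m2, key)

# The reused key cancels: XORing the two ciphertexts exposes m1 XOR m2
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
```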
Let’s assume for simplicity that the messages are written in capital letters with spaces but with no other punctuation. The letters are converted to ASCII using
A = 1000001,  B = 1000010,  ...,  Z = 1011010,  space = 0100000
(the letters A to Z are the numbers 65 through 90 written in binary, and space is 32 in binary; see Table 4.1).
The XORs of the messages are
Note that the first block of m1 ⊕ m2 is 0000000. This means that the first letter of m1 is the same as the first letter of m2. But the observation that makes the biggest difference for us is that “space” has a 0 as its leading bit, while all the letters have 1 as the leading bit. Therefore, if the leading bit of an XOR block is 1, then it arises from a letter XORed with “space.”
For example, the third block of m1 ⊕ m2 is 1100100 and the third block of m1 ⊕ m3 is 1100001. These can happen only from D ⊕ space and A ⊕ space, respectively. It follows easily that m1 has “space” as its third entry, m2 has D as its third entry, and m3 has A as its third entry. Similarly, we obtain the 2nd, 7th, 8th, and 9th entries of each message.
The 5th entries cause a little more trouble: m1 ⊕ m2 and m1 ⊕ m3 tell us that there is “space” XORed with A, and m2 ⊕ m3 tells us that the 5th entries of m2 and m3 are equal, but we could have “space A A” or “A space space.” We need more information from surrounding letters to determine which it is.
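The leading-bit observation is easy to automate; a sketch (the two messages are our own examples, not the ones in the text):

```python
# A-Z are 65..90, so their 7-bit codes have the 64s bit set; space is 32.
def find_space_positions(xor_blocks):
    """Positions where one message has a space and the other a letter:
    exactly the XOR blocks whose leading (64s) bit is 1."""
    return [i for i, block in enumerate(xor_blocks) if block & 0x40]

m1 = b"MEET AT NOON"
m2 = b"THE CODE RED"
x = [a ^ b for a, b in zip(m1, m2)]
print(find_space_positions(x))   # [3, 4, 7, 8]
```

At each reported position, XORing the block with 0100000 recovers the letter on the non-space side.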
To proceed, we use the fact that some letters are more common than others, and therefore certain XORs are more likely to arise from these than from others. For example, the block 0010001 is more likely to arise from E ⊕ T than from a pair of rare letters. The most common letters in English are E, T, A, O, I, N, S, H, R, D, L, U. Make a table of the XORs of each pair of these letters. If an XOR block occurs in this table, we guess that it comes from one of the pairs yielding this block. For example, 0001001 arises from A ⊕ H and from E ⊕ L, so we guess that the 11th block, which is 0001001, comes from one of these two combinations. This might not be a correct guess, but we expect such guesses to be right often enough to yield a lot of information.
Rather than produce the whole table, we give the blocks that we need and the combinations of these frequent letters that produce them:
Let’s look at the next-to-last block of each XOR. In m1 ⊕ m2, we have 0010111, which could come from E ⊕ R or D ⊕ S. In m1 ⊕ m3, we have 0000100, which could come from A ⊕ E or H ⊕ L. In m2 ⊕ m3, we have 0010011, which could come from A ⊕ R. The only combination consistent with these guesses is E, R, A for the next-to-last letters of m1, m2, m3, respectively. This type of reasoning, combined with the information obtained from the occurrence of spaces, yields the following progress (- indicates a space, * indicates a letter to be determined):
The first letter of is a one-letter word. The XOR block 0000001 gives us the possibilities so we guess starts with . The XORs tell us that and start with .
Now let’s look at the 12th letters. The block 0001111 for suggests . The block for suggests and . The block for suggests . These do not have a common solution, so we know that one of the less common letters occurs in at least one of the messages. But ends with , and a good guess is that this should be . If so, then the XOR information tells us that ends in SECRET, and ends in TODAY, both of which are words. So this looks like a good guess. Our progress so far is the following:
It is time to revisit the 5th letters. We already know that they are “space A A” or “A space space.” The first possibility requires to have two consecutive one-letter words, which seems unlikely. The second possibility means that starts with the two words , so we can guess this is . The XOR information tells us that and have and , respectively. We now have all the letters:
Something seems to be wrong in the 6th column. These letters were deduced from the assumption that they all were common letters, and this must have been wrong. But at this point, we can make educated guesses. When we change to in , and make the corresponding changes in and required by the XORs, we get the final answer:
These techniques can also be used when there are only two messages, but progress is usually slower. More possible combinations must be tried, and more false deductions need to be corrected during the decryption. The process can be automated. See [Dawson-Nielsen].
The use of spaces and the restriction to capital letters made the decryption in the example easier. However, even if spaces are removed and more symbols are used, decryption is usually possible, though much more tedious.
During World War II, problems with production and distribution of one-time pads forced Russian embassies to reuse some keys. This was discovered, and part of the Venona Project by the U.S. Army’s Signal Intelligence Service (the predecessor to NSA) was dedicated to deciphering these messages. Information obtained this way revealed several examples of Russian espionage, for example, in the Manhattan Project’s development of atomic bombs, in the State Department, and in the White House.
Everyone knows that the one-time pad provides perfect secrecy. But what does this mean? In this section, we make this concept precise. Also, we know that it is very difficult in practice to produce a truly random key for a one-time pad. In the next section, we show quantitatively how biases in producing the key affect the security of the encryption.
In Section 20.4, we repeat some of the arguments of the present section and phrase them in terms of entropy and information theory.
The topics of this section and the next are part of the subject known as Provable Security. Rather than relying on intuition that a cryptosystem is secure, the goal is to isolate exactly what fundamental problems are the basis for its security. The result of the next section shows that the security of a one-time pad is based on the quality of the random number generator. In Section 10.5, we will show that the security of the ElGamal public key cryptosystem reduces to the difficulty of the Computational Diffie-Hellman problem, one of the fundamental problems related to discrete logarithms. In Section 12.3, we will use the Random Oracle Model to relate the security of a simple cryptosystem to the noninvertibility of a one-way function. Since these fundamental problems have been well studied, it is easier to gauge the security levels of the cryptosystems.
First, we need to define conditional probability. Let's consider an example. We know that if it rains Saturday, then there is a reasonable chance that it will rain on Sunday. To make this more precise, we want to compute the probability that it rains on Sunday, given that it rains on Saturday. So we restrict our attention to only those situations where it rains on Saturday and count how often this happens over several years. Then we count how often it rains on both Saturday and Sunday. The ratio gives an estimate of the desired probability. If we call A the event that it rains on Saturday and B the event that it rains on Sunday, then the conditional probability of B given A is
P(B | A) = P(A ∩ B)/P(A),    (4.1)
where P(E) denotes the probability of the event E. This formula can be used to define the conditional probability of one event given another for any two events A and B that have probabilities (we implicitly assume throughout this discussion that any probability that occurs in a denominator is nonzero).
Events A and B are independent if P(A ∩ B) = P(A)P(B). For example, if Alice flips a fair coin, let A be the event that the coin ends up Heads. If Bob rolls a fair six-sided die, let B be the event that he rolls a 3. Then P(A) = 1/2 and P(B) = 1/6. Since all 12 combinations of coin results and die results are equally likely, P(A ∩ B) = 1/12, which equals P(A)P(B). Therefore, A and B are independent.
If A and B are independent, then
P(B | A) = P(A ∩ B)/P(A) = P(A)P(B)/P(A) = P(B),
which means that knowing that A happens does not change the probability that B happens. By reversing the steps in the above equation, we see that if P(B | A) = P(B), then A and B are independent.
An example of events that are not independent is the original example, where A is the event that it rains on Saturday and B is the event that it rains on Sunday, since P(B | A) > P(B). (Unfortunately, a widely used high school algebra text published around 2005 gave exactly one example of independent events.)
How does this relate to cryptography? In a cryptosystem, there is a set of possible keys. Let's say we have N keys. If we have a perfect random number generator to choose the keys, then the probability that the key is k is P(k) = 1/N for each k. In this case we say that the key is chosen uniformly randomly. In any case, we assume that each key k has a certain probability P(k) of being chosen. We also have various possible plaintexts m, and each one has a certain probability P(m). These probably do not all have the same probability. For example, the message attack at noon is usually more probable than two plus two equals seven. Finally, each possible ciphertext c has a probability P(c).
We say that a cryptosystem has perfect secrecy if
P(m | c) = P(m)
for all possible plaintexts m and all possible ciphertexts c. In other words, knowledge of the ciphertext never changes the probability that a given plaintext occurs. This means that eavesdropping gives no advantage to Eve if she wants to guess the message.
We can now formalize what we claimed about the one-time pad.
If the key is chosen uniformly randomly, then the one-time pad has perfect secrecy.
Proof. We need to show that P(m | c) = P(m) for each pair m, c.
Let's say that there are N keys, each of which has probability 1/N. We start by showing that each possible ciphertext c also has probability 1/N. Start with any plaintext m. If c is the ciphertext, then the key must be k = m ⊕ c. Therefore, the probability that c is the ciphertext for this plaintext is the probability that m ⊕ c is the key, namely 1/N, since all keys have this probability. Therefore, we have proved that
P(c | m) = 1/N
for each c and m.
We now combine the contributions from the various possibilities for m. Note that if we sum over all possible c, then
Σ_c P(m ∩ c) = P(m),
since this is the probability of the plaintext m occurring. Similarly, the event c can be split into the disjoint sets m ∩ c for the various plaintexts m, which yields
P(c) = Σ_m P(m ∩ c) = Σ_m P(c | m) P(m) = Σ_m (1/N) P(m) = 1/N,
since the probabilities P(m) sum to 1. Applying Equation (4.1) twice yields
P(m | c) = P(m ∩ c)/P(c) = P(c | m) P(m)/P(c).
Since we have already proved that P(c | m) = 1/N = P(c), these two factors cancel, and we obtain
P(m | c) = P(m),
which says that the one-time pad has perfect secrecy.
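As a sanity check of this theorem, one can verify P(m | c) = P(m) exhaustively for a tiny one-time pad (the 3-bit message space and the skewed plaintext distribution below are our own choices, made deliberately nonuniform):

```python
from fractions import Fraction

msgs = cts = range(8)                          # 3-bit messages and ciphertexts
P_m = {m: Fraction(m + 1, 36) for m in msgs}   # arbitrary plaintext distribution
P_key = Fraction(1, 8)                         # keys chosen uniformly at random

# P(m and c): the key is forced to be m XOR c, which has probability 1/8
P_mc = {(m, c): P_m[m] * P_key for m in msgs for c in cts}
P_c = {c: sum(P_mc[(m, c)] for m in msgs) for c in cts}

# Perfect secrecy: P(m | c) = P(m and c)/P(c) equals P(m) for every pair
assert all(P_mc[(m, c)] / P_c[c] == P_m[m] for m in msgs for c in cts)
print("P(m | c) = P(m) holds for all 64 pairs")
```

Note that every P(c) comes out to 1/8, exactly as in the proof.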
One of the difficulties with using the one-time pad is that the number of possible keys is at least as large as the number of possible messages. Unfortunately, this is required for perfect secrecy:
If a cryptosystem has perfect secrecy, then the number of possible keys is greater than or equal to the number of possible plaintexts.
Proof. Let M be the number of possible plaintexts and let N be the number of possible keys. Suppose M > N. Let c be a ciphertext. For each key k, decrypt c using the key k. This gives at most N possible plaintexts, and these are the only plaintexts that can encrypt to c. Since M > N, there is some plaintext m that is not a decryption of c. Therefore,
P(m | c) = 0 ≠ P(m).
This contradicts the assumption that the system has perfect secrecy. Therefore, N ≥ M.
Suppose a parent goes to the pet store to buy a pet for a child’s birthday. The store sells 30 different pets with 3-letter names (ant, bat, cat, dog, eel, elk, ...). The parent chooses a pet at random, encrypts its name with a shift cipher, and sends the ciphertext to let the other parent know what has been bought. The child intercepts the message, which is ZBL. The child hopes the present is a dog. Since DOG is not a shift of ZBL, the child realizes that the conditional probability

P(M = DOG | C = ZBL) = 0,

and is disappointed. Since P(M = DOG) = 1/30 (because there are 30 equally likely possibilities), we have P(M = DOG | C = ZBL) ≠ P(M = DOG), so there is not perfect secrecy. This is because a given ciphertext has at most 26 possible corresponding plaintexts, so knowledge of the ciphertext restricts the possibilities for the decryption. Then the child realizes that YAK is the only pet name that is a shift of ZBL, so P(M = YAK | C = ZBL) = 1. This does not equal P(M = YAK) = 1/30, so again we see that we don’t have perfect secrecy. But now the child is happy, being the only one in the neighborhood who will have a yak as a pet.
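The child’s reasoning is easy to check by listing all 26 shifts of ZBL. In the sketch below, the pet list is an abbreviated stand-in of our own for the store’s 30 names:

```python
def shift(word, k):
    # Shift each letter of an uppercase word by k places (mod 26).
    return "".join(chr((ord(ch) - ord("A") + k) % 26 + ord("A")) for ch in word)

# All 26 possible decryptions of the intercepted ciphertext:
candidates = {shift("ZBL", k) for k in range(26)}

# A few of the store's 30 pet names (illustrative subset):
pets = {"ANT", "BAT", "CAT", "DOG", "EEL", "ELK", "YAK"}

print(sorted(candidates & pets))  # ['YAK'] -- DOG is not a shift of ZBL
```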
A basic requirement for a secure cryptosystem is ciphertext indistinguishability. This can be described by the following game:
CI Game: Alice chooses two messages m_0 and m_1 and gives them to Bob. Bob randomly chooses b = 0 or 1. He encrypts m_b to get a ciphertext c, which he gives to Alice. Alice then guesses whether m_0 or m_1 was encrypted.
By randomly guessing, Alice can guess correctly about 1/2 of the time. If there is no strategy where she guesses correctly significantly more than 1/2 the time, then we say the cryptosystem has the ciphertext indistinguishability property.
For example, the shift cipher does not have this property. Suppose Alice chooses the two messages to be CAT and DOG. Bob randomly chooses one of them and sends back the ciphertext PNG. Alice observes that this cannot be a shift of DOG and thus concludes that Bob encrypted CAT.
When implemented in the most straightforward fashion, the RSA cryptosystem (see Chapter 9) also does not have the property. Since the encryption method is public, Alice can simply encrypt the two messages and compare with what Bob sends her. However, if Bob pads the messages with random bits before encryption, using a good pseudorandom number generator, then Alice should not be able to guess correctly significantly more than 1/2 the time because she will not know the random bits used in the padding.
The one-time pad where the key is chosen randomly has ciphertext indistinguishability. Because Bob chooses b randomly,

P(b = 0) = P(b = 1) = 1/2.

From the previous section, we know that

P(C = c | M = m_0) = 2^{-n}

and

P(C = c | M = m_1) = 2^{-n}.

Therefore, when Alice receives c, the two possibilities are equally likely, so the probability she guesses correctly is 1/2.
Because the one-time pad is too unwieldy for many applications, pseudorandom generators are often used to generate substitutes for one-time pads. In Chapter 5, we discuss some possibilities. For the present, we analyze how much such a choice can affect the security of the system.
A pseudorandom key generator produces only a limited set of possible keys. Usually, it takes an input, called a seed, and applies some algorithm to produce a key that “looks random.” The seed is transmitted to the decrypter of the ciphertext, who uses the seed to produce the key and then decrypt. The seed is significantly shorter than the key. While the key might have, for example, 1 million bits, the seed could have only 100 bits, which makes transmission much more efficient, but this also means that there are far fewer keys than with the one-time pad. Therefore, Proposition 4.4 says that perfect secrecy is not possible.
If the seed had only 20 bits, it would be possible to use all of the seeds to generate all possible keys. Then, given a ciphertext and a plaintext, it would be easy to see if there is a key that encrypts the plaintext to the ciphertext. But with a seed of 100 bits, it is infeasible to list all seeds and find the corresponding keys. Moreover, with a good pseudorandom key generator, it should be difficult to see whether a given key is one that could be produced from some seed.
To evaluate a pseudorandom key generator, Alice (the adversary) and Bob play the following game:
R Game: Bob flips a fair coin. If it’s Heads, he chooses a number r uniformly at random from the keyspace. If it’s Tails, he chooses a pseudorandom key r. Bob sends r to Alice. Alice guesses whether r was chosen randomly or pseudorandomly.
Of course, Alice could always guess that it’s random, for example, or she could flip her own coin and use that for her guess. In these cases, her probability of guessing correctly is 1/2. But suppose she knows something about the pseudorandom generator (maybe she has analyzed its inner workings, for example). Then she might be able to recognize sometimes that r looks like something the pseudorandom generator could produce (of course, the random generator could also produce it, but with lower probability, since the random generator has many more possible outputs). This could occasionally give Alice a slight edge in guessing, so her overall probability of winning could increase slightly.
In an extreme case, suppose Alice knows that the pseudorandom number generator always has a 1 at the beginning of its output. The true random number generator will produce such an output with probability 1/2. If Alice sees this initial 1, she guesses that the output is from the pseudorandom generator. And if this 1 is not present, Alice knows that r is random. Therefore, Alice guesses correctly with probability 3/4. (This is Exercise 9.)
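The 3/4 figure is easy to check by simulation. The two generators below are toy stand-ins of our own (the “pseudorandom” one always emits a leading 1):

```python
import random

def play_round(rng):
    # Bob's coin flip: Heads -> truly random 8-bit output;
    # Tails -> "pseudorandom" output that always begins with 1 (the toy flaw).
    truly_random = rng.random() < 0.5
    if truly_random:
        r = [rng.randint(0, 1) for _ in range(8)]
    else:
        r = [1] + [rng.randint(0, 1) for _ in range(7)]
    # Alice's strategy: a leading 0 can only come from the random generator.
    guess_random = (r[0] == 0)
    return guess_random == truly_random

rng = random.Random(0)
trials = 100_000
wins = sum(play_round(rng) for _ in range(trials))
print(wins / trials)  # close to the predicted 3/4
```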
We write

P(Alice wins the R Game) = 1/2 + ε.

A good pseudorandom generator should have ε very small, no matter what strategy Alice uses.
But will a good pseudorandom key generator work in a one-time pad? Let’s test it with the CI Game. Suppose Charles is using a pseudorandom key generator for his one-time pad and he is going to play the CI Game with Alice. Moreover, suppose Alice has a strategy for winning this CI Game with probability 1/2 + ε′ (if ε′ is significantly greater than 0, then Charles’s implementation is not very good). We’ll show that, under these assumptions, Alice can play the R Game with Bob and win with probability 1/2 + ε′/2.
For example, suppose that the pseudorandom number generator is such that Alice has probability at most 1/2 + ε of winning the R Game. Then we must have 1/2 + ε′/2 ≤ 1/2 + ε, so ε′ ≤ 2ε. This means that the probability that Alice wins the CI Game against Charles is at most 1/2 + 2ε. In this way, we conclude that if the random number generator is good, then its use in a one-time pad is good.
Here is Alice’s strategy for the R Game. Alice and Bob do the following:
Bob flips a fair coin, as in the R Game, and gives the resulting key r to Alice. Alice wants to guess whether r is random or pseudorandom.
Alice uses r to play the CI Game with Charles.
She calls up Charles, who chooses messages m_0 and m_1 and gives them to Alice.
Alice chooses b = 0 or 1 randomly.
She encrypts m_b using the key r to obtain the ciphertext c.
She sends c to Charles.
Charles makes his guess for b and succeeds with probability 1/2 + ε′.
Alice now uses Charles’s guess to finish playing the R Game.
If Charles guessed correctly, her guess to Bob is that r was pseudorandom.
If Charles guessed incorrectly, her guess to Bob is that r was random.
There are two ways that Alice wins the R Game. One is when Charles is correct and r is pseudorandom, and the other is when Charles is incorrect and r is random.
The probability that Charles is correct when r is pseudorandom is 1/2 + ε′, by assumption. This means that

P(Alice wins | r is pseudorandom) = 1/2 + ε′.

If r is random, then Alice encrypted m_b with a true one-time pad, so Charles succeeds half the time and fails half the time. Therefore,

P(Alice wins | r is random) = 1/2.

Putting the previous two calculations together, we see that the probability that Alice wins the R Game is

(1/2)(1/2 + ε′) + (1/2)(1/2) = 1/2 + ε′/2,

as we claimed.
The preceding shows that if we design a good pseudorandom key generator, then an adversary can gain only a very slight advantage in using a ciphertext to distinguish between two plaintexts. Unfortunately, there are no good ways to prove that a given pseudorandom number generator is good (this would require solving some major problems in complexity theory), but knowing where the security of the system lies is significant progress.
For a good introduction to cryptography via the language of computational security and proofs of security, see [Katz and Lindell].
Alice is learning about the shift cipher. She chooses a random three-letter word (so all three-letter words in the dictionary have the same probability) and encrypts it using a shift cipher with a randomly chosen key (that is, each possible shift has probability 1/26). Eve intercepts the ciphertext mxp.
Compute P(M = cat | C = mxp). (Hint: Can mxp shift to cat?)
Use your result from part (a) to show that the shift cipher does not have perfect secrecy (this is also true because there are fewer keys than plaintexts; see the proposition at the end of the first section).
Alice is learning more advanced techniques for the shift cipher. She now chooses a random five-letter word (so all five-letter words in the dictionary have the same probability) and encrypts it using a shift cipher with a randomly chosen key (that is, each possible shift has probability 1/26). Eve intercepts the ciphertext evire. Show that

P(M = arena | C = evire) = P(M = river | C = evire) = 1/2.

(Hint: Look at Exercise 1 in Chapter 2.)
Suppose a message is chosen randomly from the set of all five-letter English words and is encrypted using an affine cipher mod 26, where the key is chosen randomly from the 312 possible keys. The ciphertext is . Compute the conditional probability . Use the result of this computation to determine whether or not affine ciphers have perfect secrecy.
Alice is learning about the Vigenère cipher. She chooses a random six-letter word (so all six-letter words in the dictionary have the same probability) and encrypts it using a Vigenère cipher with a randomly chosen key of length 3 (that is, each possible key has probability 1/26³). Eve intercepts the ciphertext eblkfg.
Compute the conditional probability .
Use your result from part (a) to show that the Vigenère cipher does not have perfect secrecy.
Alice and Bob play the following game (this is the CI Game of Section 4.5). Alice chooses two two-letter words m_0 and m_1 and gives them to Bob. Bob randomly chooses b = 0 or 1. He encrypts m_b using a shift cipher (with a randomly chosen shift) to get a ciphertext c, which he gives to Alice. Alice then guesses whether m_0 or m_1 was encrypted.
Alice chooses and . What is the probability that Alice guesses correctly?
Give a choice of m_0 and m_1 that Alice can make so that she is guaranteed to be able to guess correctly.
Bob has a weak pseudorandom generator that produces different -bit keys, each with probability . Alice and Bob play the R Game. Alice makes a list of the possible pseudorandom keys. If the number that Bob gives to her is on this list, she guesses that the number was chosen pseudorandomly. If it is not on the list, she guesses that it is random.
Show that
and
Show that
Show that Alice wins with probability
Show that if then Alice wins with probability . (This shows that if the pseudorandom generator misses a fraction of the possible keys, then Alice has an advantage in the game, provided that she can make a list of all possible outputs of the generator. Therefore, it is necessary to make large enough that making such a list is infeasible.)
Suppose Alice knows that Bob’s pseudorandom key generator has a slight bias: with probability 51% it produces a key with more 1’s than 0’s. Alice and Bob play the CI Game. Alice chooses a message m_0, lets m_1 be the bitwise complement of m_0, and gives both messages to Bob, who randomly chooses b and encrypts m_b with a one-time pad using his pseudorandom key generator. He gives the ciphertext c = m_b ⊕ k (where k is his pseudorandom key) to Alice. Alice computes c ⊕ m_0 and c ⊕ m_1. If c ⊕ m_0 has more 1’s than 0’s, she guesses that b = 0. If not, she guesses that b = 1. For simplicity in the following, we assume that the message lengths are odd (so there cannot be the same number of 1’s and 0’s).
Show that exactly one of c ⊕ m_0 and c ⊕ m_1 has more 1’s than 0’s.
Show that P(Alice guesses correctly | b = 0) = .51.
Show that P(Alice guesses correctly | b = 1) = .51.
Show that Alice has a probability .51 of winning.
In the one-time pad, suppose that some plaintexts are more likely than others. Show that the key and the ciphertext are not independent. That is, show that there is a key k and a ciphertext c such that

P(K = k | C = c) ≠ P(K = k).

(Hint: The right-hand side of this equation is independent of k and c. What about the left-hand side?)
Alice and Bob are playing the R Game. Suppose Alice knows that Bob’s pseudorandom number generator always has a 1 at the beginning of its output. If Alice sees this initial 1, she guesses that the output is from the pseudorandom generator. And if this 1 is not present, Alice knows that r is random. Show that Alice guesses correctly with probability 3/4.
Suppose Bob uses a pseudorandom number generator to produce a one-time pad, but the generator has a slight bias, so each bit it produces has probability 51% of being 1 and only 49% of being 0. What strategy can Alice use so that she expects to win the CI Game more than half the time?
At the end of the semester, the professor randomly chooses and sends one of two possible messages:
To add to the excitement, the professor encrypts the message using one of the following methods:
Shift cipher
Vigenère cipher with key length 3
One-time pad.
You receive the ciphertext and want to decide which of the two messages the professor sent. For each method (a), (b), (c), explain how to decide which message was sent or explain why it is impossible to decide. You may assume that you know which method is being used. (For the Vigenère cipher, do not do frequency analysis; the message is too short.)
On Groundhog Day, the groundhog randomly chooses and sends one of two possible messages:
To add to the mystery, the groundhog encrypts the message using one of the following methods: shift cipher, Vigenère cipher with key length 4 (using four distinct shifts), one-time pad. For each of the following ciphertexts, determine which encryption methods could have produced that ciphertext, and for each of these possible encryption methods, decrypt the message or explain why it is impossible to decrypt.
ABCDEFGHIJKLMNOPQRST
UKZOQTGYGGMUQHYKPVGT
UQUMPHDVTJYIUJQQCSFL.
Alice encrypts the messages m_1 and m_2 with the same one-time pad, using only capital letters and spaces, as in Section 4.3. Eve knows this, intercepts the ciphertexts c_1 and c_2, and also learns that the decryption of c_1 is THE LETTER * ON THE MAP GIVES THE LOCATION OF THE TREASURE. Unfortunately for Eve, she cannot read the missing letter *. However, the 12th group of seven bits in c_1 is and the 12th group in c_2 is . Find the missing letter.
The one-time pad provides a strong form of secrecy, but since key transmission is difficult, it is desirable to devise substitutes that are easier to use. Stream ciphers are one way of achieving this goal. As in the one-time pad, the plaintext is written as a string of bits. Then a binary keystream is generated and XORed with the plaintext to produce the ciphertext.
c_n ≡ p_n + k_n (mod 2), where p_n = nth plaintext bit, k_n = nth key bit, c_n = nth ciphertext bit.
For the system to be secure, the keystream needs to approximate a random sequence, so we need a good source of random-looking bits. In Section 5.1, we discuss pseudorandom number generators. In Sections 5.2 and 5.3, we describe two commonly used stream ciphers and the pseudorandom number generators that they use. Although they have security weaknesses, they give an idea of methods that can be used.
In the next chapter, we discuss block ciphers and various modes of operations. Some of the most secure stream ciphers are actually good block ciphers used, for example, in OFB or CTR mode. See Subsections 6.3.4 and 6.3.5.
There is one problem that is common to all stream ciphers that are obtained by XORing pseudorandom numbers with plaintext, and it is one of the reasons that authentication and message integrity checks are added to protect communications. Suppose Eve knows where the word “good” occurs in a plaintext that has been encrypted with a stream cipher. If she intercepts the ciphertext, she can XOR the bits of “good” ⊕ “evil” into the appropriate place in the ciphertext before continuing the transmission of the ciphertext. When the ciphertext is decrypted, “good” will be changed to “evil.” This type of attack was one of the weaknesses of the WEP system, which is discussed in Section 14.3.
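This malleability attack is simple to demonstrate. The sketch below (byte-level XOR, with names of our own choosing) tampers with a ciphertext without ever seeing the key:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"the weather is good today"
key = secrets.token_bytes(len(plaintext))      # stand-in for a keystream
ciphertext = xor_bytes(plaintext, key)

# Eve knows the offset of "good" (here we simply look it up) but not the key.
i = plaintext.index(b"good")
patch = xor_bytes(b"good", b"evil")
tampered = (ciphertext[:i]
            + xor_bytes(ciphertext[i:i + 4], patch)
            + ciphertext[i + 4:])

print(xor_bytes(tampered, key))  # b'the weather is evil today'
```

The decryption XORs away the keystream, leaving plaintext ⊕ “good” ⊕ “evil” at the patched position, which is exactly “evil.”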
The one-time pad and many other cryptographic applications require sequences of random bits. Before we can use a cryptographic algorithm, such as DES (Chapter 7) or AES (Chapter 8), it is necessary to generate a sequence of random bits to use as the key.
One way to generate random bits is to use randomness that occurs in natural processes. For example, the thermal noise from a semiconductor resistor is known to be a good source of randomness. However, just as flipping coins to produce random bits would not be practical for cryptographic applications, most natural processes are not practical, due to the inherent slowness of sampling the process and the difficulty of ensuring that an adversary does not observe the process. We would therefore like a method for generating randomness that can be done in software. Most computers have a method for generating random numbers that is readily available to the user. For example, the standard C library contains a function rand() that generates pseudorandom numbers between 0 and RAND_MAX, where RAND_MAX is at least 32767. This pseudorandom function takes a seed as input and produces an output bitstream.
The rand() function and many other pseudorandom number generators are based on linear congruential generators. A linear congruential generator produces a sequence of numbers x_1, x_2, …, where

x_n ≡ a x_{n−1} + b (mod m).

The number x_0 is the initial seed, while the numbers a, b, and m are parameters that govern the relationship. The use of pseudorandom number generators based on linear congruential generators is suitable for experimental purposes, but is highly discouraged for cryptographic purposes. This is because they are predictable (even if the parameters a, b, and m are not known), in the sense that an eavesdropper can use knowledge of some bits to predict future bits with fairly high probability. In fact, it has been shown that any polynomial congruential generator is cryptographically insecure.
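A minimal linear congruential generator makes the predictability obvious. The constants below are the widely cited C-library parameters, used purely for illustration:

```python
def lcg(seed, a=1103515245, b=12345, m=2**31):
    # x_n = a*x_{n-1} + b (mod m).
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

g = lcg(seed=42)
outputs = [next(g) for _ in range(3)]
print(outputs)

# Predictability: any single observed value determines the entire future,
# since x_n is a fixed, public function of x_{n-1}.
g2 = lcg(seed=outputs[0])
assert [next(g2) for _ in range(2)] == outputs[1:]
```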
In cryptographic applications, we need a source of bits that is nonpredictable. We now discuss two ways to create such nonpredictable bits.
The first method uses one-way functions. These are functions f(x) that are easy to compute but for which, given y, it is computationally infeasible to solve y = f(x) for x. Suppose that we have such a one-way function f and a random seed s. Define x_j = f(s + j) for j = 1, 2, 3, …. If we let b_j be the least significant bit of x_j, then the sequence b_1, b_2, … will often be a pseudorandom sequence of bits (but see Exercise 14). This method of random bit generation is often used, and has proved to be very practical. Two popular choices for the one-way function are DES (Chapter 7) and SHA, the Secure Hash Algorithm (Chapter 11). As an example, the cryptographic pseudorandom number generator in the OpenSSL toolkit (used for secure communications over the Internet) is based on SHA.
Another method for generating random bits is to use an intractable problem from number theory. One of the most popular cryptographically secure pseudorandom number generators is the Blum-Blum-Shub (BBS) pseudorandom bit generator, also known as the quadratic residue generator. In this scheme, one first generates two large primes p and q that are both congruent to 3 mod 4. We set n = pq and choose a random integer x that is relatively prime to n. To initialize the BBS generator, set the initial seed to x_0 ≡ x² (mod n). The BBS generator produces a sequence of random bits b_1, b_2, … by

x_j ≡ x_{j−1}² (mod n);
b_j is the least significant bit of x_j.
Let
Take . The initial seed is
The values for are
Taking the least significant bit of each of these, which is easily done by checking whether the number is odd or even, produces the sequence .
The Blum-Blum-Shub generator is very likely unpredictable. See [Blum-Blum-Shub]. A problem with BBS is that it can be slow to calculate. One way to improve its speed is to extract the k least significant bits of x_j; as long as k ≤ log₂ log₂ n, this seems to be cryptographically secure.
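A sketch of the BBS generator, using small toy primes of our own choosing (p = 499 and q = 547, both congruent to 3 mod 4; real use requires primes of hundreds of digits):

```python
from math import gcd

def bbs_bits(p, q, x, count):
    # Blum-Blum-Shub: n = p*q with p, q both congruent to 3 mod 4;
    # x_0 = x^2 mod n, then x_j = x_{j-1}^2 mod n, and
    # b_j = least significant bit of x_j.
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    assert gcd(x, n) == 1
    xj = (x * x) % n              # the initial seed x_0
    bits = []
    for _ in range(count):
        xj = (xj * xj) % n        # repeated squaring mod n
        bits.append(xj & 1)       # parity: odd -> 1, even -> 0
    return bits

print(bbs_bits(499, 547, 123, 8))
```

The generator is deterministic given (p, q, x), which is what allows a legitimate receiver who knows the seed to reproduce the keystream.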
Note: In this section, all congruences are mod 2.
In many situations involving encryption, there is a trade-off between speed and security. If one wants a very high level of security, speed is often sacrificed, and vice versa. For example, in cable television, many bits of data are being transmitted, so speed of encryption is important. On the other hand, security is not usually as important since there is rarely an economic advantage to mounting an expensive attack on the system.
In this section, we describe a method that could be used when speed is more important than security. However, the real use is as one building block in more complex systems.
The sequence

0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 1 1 1 1 1 0 0 0 1 1 0 1 1 1 0 1 ...

can be described by giving the initial values

x_1 = 0, x_2 = 1, x_3 = 0, x_4 = 0, x_5 = 0

and the linear recurrence relation

x_{n+5} ≡ x_n + x_{n+2} (mod 2).

This sequence repeats after 31 terms.
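The period is easy to verify by machine. Assuming, for illustration, the length-5 recurrence x_{n+5} ≡ x_n + x_{n+2} (mod 2), any nonzero starting vector returns after exactly 31 steps:

```python
def lfsr_sequence(initial, taps, length):
    # Generate bits from x_{n+m} = sum of x_{n+t} for t in taps (mod 2),
    # where m = len(initial).  Taps (0, 2) give x_{n+5} = x_n + x_{n+2}.
    m = len(initial)
    bits = list(initial)
    while len(bits) < length:
        bits.append(sum(bits[-m + t] for t in taps) % 2)
    return bits

seq = lfsr_sequence([0, 1, 0, 0, 0], taps=(0, 2), length=70)

# Smallest positive shift after which a full 31-bit window repeats:
period = next(d for d in range(1, 40) if seq[d:d + 31] == seq[:31])
print(period)  # 31
```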
For another example, see Example 18 in the Computer Appendices.
More generally, consider a linear recurrence relation of length m:

x_{n+m} ≡ c_0 x_n + c_1 x_{n+1} + ⋯ + c_{m−1} x_{n+m−1} (mod 2),

where the coefficients c_0, …, c_{m−1} are integers. If we specify the initial values

x_1, x_2, …, x_m,

then all subsequent values of x_n can be computed using the recurrence. The resulting sequence of 0s and 1s can be used as the key for encryption. Namely, write the plaintext as a sequence of 0s and 1s, then add an appropriate number of bits of the key sequence to the plaintext mod 2, bit by bit. For example, each bit of the plaintext is added mod 2 to the corresponding bit x_n of the key sequence given previously.
Decryption is accomplished by adding the key sequence to the ciphertext in exactly the same way.
One advantage of this method is that a key with large period can be generated using very little information. The long period gives an improvement over the Vigenère method, where a short period allowed us to find the key. In the above example, specifying the initial vector and the coefficients yielded a sequence of period 31, so 10 bits were used to produce 31 bits. It can be shown that the recurrence

x_{n+31} ≡ x_n + x_{n+3} (mod 2)

and any nonzero initial vector will produce a sequence with period 2^31 − 1 = 2147483647. Therefore, 62 bits produce more than two billion bits of key. This is a great advantage over a one-time pad, where the full two billion bits must be sent in advance.
This method can be implemented very easily in hardware using what is known as a linear feedback shift register (LFSR) and is very fast. In Figure 5.2 we depict an example of a linear feedback shift register in a simple case. More complicated recurrences are implemented using more registers and more XORs.
For each increment of a counter, the bit in each box is shifted to other boxes as indicated, with ⊕ denoting the addition mod 2 of the incoming bits. The output, which is the bit that leaves the register, is added to the next bit of plaintext to produce the ciphertext. The diagram in Figure 5.2 represents one such recurrence. Once the initial values are specified, the machine produces the subsequent bits very efficiently.
Unfortunately, the preceding encryption method succumbs easily to a known plaintext attack. More precisely, if we know only a few consecutive bits of plaintext, along with the corresponding bits of ciphertext, we can determine the recurrence relation and therefore compute all subsequent bits of the key. By subtracting (or adding; it’s all the same mod 2) the plaintext from the ciphertext mod 2, we obtain the bits of the key. Therefore, for the rest of this discussion, we will ignore the ciphertext and plaintext and assume we have discovered a portion of the key sequence. Our goal is to use this portion of the key to deduce the coefficients of the recurrence and consequently compute the rest of the key.
For example, suppose we know the initial segment 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0 of a sequence that has period 15, and suppose we know it is generated by a linear recurrence. How do we determine the coefficients of the recurrence? We do not necessarily know even the length, so we start with length 2 (length 1 would produce a constant sequence). Suppose the recurrence is x_{n+2} ≡ c_0 x_n + c_1 x_{n+1} (mod 2). Let n = 1 and n = 2 and use the known values x_1 = 0, x_2 = 1, x_3 = 1, x_4 = 0. We obtain the equations

1 = c_0 · 0 + c_1 · 1   (n = 1)
0 = c_0 · 1 + c_1 · 1   (n = 2).

In matrix form, this is

( 0 1 ) ( c_0 )   ( 1 )
( 1 1 ) ( c_1 ) = ( 0 ).

The solution is (c_0, c_1) = (1, 1), so we guess that the recurrence is x_{n+2} ≡ x_n + x_{n+1} (mod 2). Unfortunately, this is not correct, since x_6 = 0 while the conjectured recurrence predicts x_6 ≡ x_4 + x_5 = 1. Therefore, we try length 3. The resulting matrix equation is

( 0 1 1 ) ( c_0 )   ( 0 )
( 1 1 0 ) ( c_1 ) = ( 1 )
( 1 0 1 ) ( c_2 )   ( 0 ).
The determinant of the matrix is 0 mod 2; in fact, the equation has no solution. We can see this because every column in the matrix sums to 0 mod 2, while the vector on the right does not.
Now consider length 4. The matrix equation is

( 0 1 1 0 ) ( c_0 )   ( 1 )
( 1 1 0 1 ) ( c_1 ) = ( 0 )
( 1 0 1 0 ) ( c_2 )   ( 1 )
( 0 1 0 1 ) ( c_3 )   ( 1 ).

The solution is (c_0, c_1, c_2, c_3) = (1, 1, 0, 0). The resulting recurrence is now conjectured to be

x_{n+4} ≡ x_n + x_{n+1} (mod 2).
A quick calculation shows that this generates the remaining bits of the piece of key that we already know, so it is our best guess for the recurrence that generates the key sequence.
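The length-testing procedure can be automated. The following sketch (our own code, applied to a reconstruction of the period-15 example sequence) builds the matrix from 2m known key bits and solves it by Gaussian elimination mod 2:

```python
def solve_mod2(A, b):
    # Gaussian elimination over GF(2); returns a solution vector,
    # or None when the matrix is singular mod 2.
    m = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    for col in range(m):
        pivot = next((r for r in range(col, m) if M[r][col]), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(m):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

def find_recurrence(bits, m):
    # Test for a length-m recurrence x_{n+m} = c_0 x_n + ... + c_{m-1} x_{n+m-1}
    # using the first 2m known key bits.
    A = [bits[i:i + m] for i in range(m)]
    b = [bits[i + m] for i in range(m)]
    return solve_mod2(A, b)

key = [0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0]
print(find_recurrence(key, 2))  # [1, 1] -- but this guess fails on later bits
print(find_recurrence(key, 3))  # None: the 3x3 matrix is singular mod 2
print(find_recurrence(key, 4))  # [1, 1, 0, 0]: x_{n+4} = x_n + x_{n+1}
```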
What happens if we try length 5? The matrix equation is

( 0 1 1 0 1 ) ( c_0 )   ( 0 )
( 1 1 0 1 0 ) ( c_1 )   ( 1 )
( 1 0 1 0 1 ) ( c_2 ) = ( 1 )
( 0 1 0 1 1 ) ( c_3 )   ( 1 )
( 1 0 1 1 1 ) ( c_4 )   ( 1 ).

The determinant of the matrix is 0 mod 2. Why? Notice that the last row is the sum of the first and second rows. This is a consequence of the recurrence relation: x_5 = x_1 + x_2, x_6 = x_2 + x_3, etc., so (x_5, x_6, …, x_9) = (x_1, …, x_5) + (x_2, …, x_6). As in linear algebra with real or complex numbers, if one row of a matrix is a linear combination of other rows, then the determinant is 0.
Similarly, if we look at the 6 × 6 matrix, we see that the 5th row is the sum of the first and second rows, and the 6th row is the sum of the second and third rows, so the determinant is 0 mod 2. In general, when the size of the matrix is larger than the length of the recurrence relation, the relation forces one row to be a linear combination of other rows, hence the determinant is 0 mod 2.
The general situation is as follows. To test for a recurrence of length m, we assume we know the first 2m bits x_1, x_2, …, x_{2m} of the key. The matrix equation is

( x_1    x_2      ⋯  x_m      ) ( c_0     )   ( x_{m+1} )
( x_2    x_3      ⋯  x_{m+1}  ) ( c_1     )   ( x_{m+2} )
(  ⋮                          ) (  ⋮      ) = (  ⋮      )
( x_m    x_{m+1}  ⋯  x_{2m−1} ) ( c_{m−1} )   ( x_{2m}  ).

We show later that this matrix is invertible mod 2 if and only if there is no linear recurrence of length less than m that is satisfied by x_1, x_2, …, x_{2m−1}.
A strategy for finding the coefficients of the recurrence is now clear. Suppose we know the first 100 bits of the key. For m = 2, 3, 4, …, form the m × m matrix as before and compute its determinant. If several consecutive values of m yield 0 determinants, stop. The last m to yield a nonzero (i.e., 1 mod 2) determinant is probably the length of the recurrence. Solve the matrix equation for that m to get the coefficients c_0, …, c_{m−1}. It can then be checked whether the sequence that this recurrence generates matches the sequence of known bits of the key. If not, try larger values of m.
Suppose we don’t know the first 100 bits, but rather some other 100 consecutive bits of the key. The same procedure applies, using these bits as the starting point. In fact, once we find the recurrence, we can also work backwards to find the bits preceding the starting point.
Here is an example. Suppose we have the following sequence of 100 bits:
The first 20 determinants, starting with m = 1, are
A reasonable guess is that gives the last nonzero determinant. When we solve the matrix equation for the coefficients we get
so we guess that the recurrence is
This recurrence generates all 100 terms of the original sequence, so we have the correct answer, at least based on the knowledge that we have.
Suppose that the 100 bits were in the middle of some sequence, and we want to know the preceding bits. For example, suppose the sequence starts with , so . Write the recurrence as
(it might appear that we made some sign errors, but recall that we are working mod 2, so −1 ≡ +1 and subtraction is the same as addition). Letting n = 0 yields
Continuing in this way, we successively determine the preceding bits x_0, x_{−1}, x_{−2}, ….
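Working backwards is equally mechanical. Assuming the length-4 recurrence x_{n+4} ≡ x_n + x_{n+1} (mod 2) from the period-15 example, each earlier bit follows from x_n ≡ x_{n+4} + x_{n+1}:

```python
def extend_backwards(bits, steps):
    # x_{n+4} = x_n + x_{n+1} (mod 2) rearranges to x_n = x_{n+4} + x_{n+1},
    # since -1 = +1 mod 2; each preceding bit is determined by later bits.
    bits = list(bits)
    for _ in range(steps):
        bits.insert(0, bits[3] ^ bits[0])
    return bits

known = [0, 1, 1, 0, 1, 0, 1, 1]   # a window of the period-15 sequence
print(extend_backwards(known, 5))  # [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
```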
For more examples, see Examples 19 and 20 in the Computer Appendices.
We now prove the result we promised.
Let x_1, x_2, x_3, … be a sequence of bits produced by a linear recurrence mod 2. For each n ≥ 1, let M_n be the n × n matrix whose ith row is

(x_i, x_{i+1}, …, x_{i+n−1}),  1 ≤ i ≤ n.

Let N be the length of the shortest recurrence that generates the sequence x_1, x_2, …. Then det(M_N) ≡ 1 (mod 2) and det(M_n) ≡ 0 (mod 2) for all n > N.
Proof. We first make a few remarks on the length of recurrences. A sequence could satisfy a length 3 relation such as x_{n+3} ≡ x_{n+2} (mod 2). It would clearly then also satisfy the shorter relation x_{n+1} ≡ x_n (at least for n ≥ 3). However, there are less obvious ways that a sequence could satisfy a recurrence of length less than expected. For example, consider the relation x_{n+4} ≡ x_{n+3} + x_{n+1} + x_n (mod 2). Suppose the initial values of the sequence are 1, 0, 1, 1. The recurrence allows us to compute subsequent terms: 0, 1, 1, 0, 1, 1, …. It is easy to see that the sequence satisfies the length 2 relation x_{n+2} ≡ x_n + x_{n+1} (mod 2).
If there is a recurrence of length N and if n > N, then one row of the matrix M_n is congruent mod 2 to a linear combination of other rows. For example, if the recurrence is x_{n+3} ≡ x_n + x_{n+2} (mod 2), then the fourth row is the sum of the first and third rows. Therefore, det(M_n) ≡ 0 (mod 2) for all n > N.
Now suppose det(M_N) ≡ 0 (mod 2). Then there is a nonzero row vector b = (b_0, b_1, …, b_{N−1}) such that b M_N ≡ 0. We’ll show that this gives a recurrence relation for the sequence and that the length of this relation is less than N. This contradicts the assumption that N is smallest. This contradiction implies that det(M_N) ≡ 1 (mod 2).
Let the recurrence of length N be

x_{n+N} ≡ c_0 x_n + c_1 x_{n+1} + ⋯ + c_{N−1} x_{n+N−1} (mod 2).

For each j ≥ 1, let

w_j = (x_j, x_{j+1}, …, x_{j+N−1})^T.

Then w_1, w_2, …, w_N are the columns of M_N. The recurrence relation implies that

c_0 w_j + c_1 w_{j+1} + ⋯ + c_{N−1} w_{j+N−1} ≡ w_{j+N} (mod 2)

for every j; in particular, w_{N+1}, which is the last column of the matrix with columns w_2, …, w_{N+1}, is a combination of w_1, …, w_N.
By the choice of b, we have b w_1 ≡ b w_2 ≡ ⋯ ≡ b w_N ≡ 0. Suppose that we know that b w_j ≡ 0 for all j ≤ i, for some i ≥ N. Then

b w_{i+1} ≡ b (c_0 w_{i+1−N} + c_1 w_{i+2−N} + ⋯ + c_{N−1} w_i) ≡ 0.

Therefore, b annihilates w_{i+1} as well. By induction, we obtain b w_j ≡ 0 for all j.
The relation b w_n ≡ 0 says that

b_0 x_n + b_1 x_{n+1} + ⋯ + b_{N−1} x_{n+N−1} ≡ 0 (mod 2)

for every n ≥ 1. Since b is not the zero vector, b_l ≠ 0 for at least one l. Let l be the largest l such that b_l ≠ 0, which means that b_l = 1. We are working mod 2, so −b_l ≡ 1. Therefore, we can rearrange the relation to obtain

x_{n+l} ≡ b_0 x_n + b_1 x_{n+1} + ⋯ + b_{l−1} x_{n+l−1} (mod 2).

This is a recurrence of length l < N. Since N is assumed to be the shortest possible length, we have a contradiction. Therefore, the assumption that det(M_N) ≡ 0 must be false, so det(M_N) ≡ 1 (mod 2). This completes the proof.
Finally, we make a few comments about the period of a sequence. Suppose the length of the recurrence is m. Any m consecutive terms of the sequence determine all future elements, and, by reversing the recurrence, all previous values, too. Clearly, if we have m consecutive 0s, then all future values are 0. Also, all previous values are 0. Therefore, we exclude this case from consideration. There are 2^m − 1 strings of 0s and 1s of length m in which at least one term is nonzero. Therefore, as soon as there are more than 2^m − 1 terms, some string of length m must occur twice, so the sequence repeats. The period of the sequence is at most 2^m − 1.
Associated to a recurrence

x_{n+m} ≡ c_0 x_n + c_1 x_{n+1} + ⋯ + c_{m−1} x_{n+m−1} (mod 2),

there is a polynomial

f(T) = T^m + c_{m−1} T^{m−1} + ⋯ + c_1 T + c_0.

If f(T) is irreducible mod 2 (this means that it is not congruent to the product of two lower-degree polynomials), then it can be shown that the period of the sequence divides 2^m − 1. An interesting case is when 2^m − 1 is prime (these are called Mersenne primes). If the period isn’t 1, that is, if the sequence is not constant, then the period in this special case must be maximal, namely 2^m − 1 (see Section 3.11). The example where the period is 2^31 − 1 is of this type.
Linear feedback shift register sequences have been studied extensively. For example, see [Golomb] or [van der Lubbe].
One way of thwarting the above attack is to use nonlinear recurrences, for example,

x_{n+3} ≡ x_{n+2} x_n + x_{n+1} (mod 2).
Moreover, a look-up table that takes several of the preceding bits as inputs and outputs a bit could be used, or several LFSRs could be combined nonlinearly, with some of them having irregular clocking. Generally, these systems are somewhat harder to break. However, we shall not discuss them here.
RC4 is a stream cipher that was developed by Rivest and has been widely used because of its speed and simplicity. The algorithm was originally secret, but it was leaked to the Internet in 1994 and has since been extensively analyzed. In particular, certain statistical biases were found in the keystream it generates, especially in the initial bytes. Therefore, a version called RC4-drop[n] is often used, in which the first n bytes of the keystream are dropped before encryption begins. However, this version is still not recommended for situations requiring high security.
To start the generation of the keystream for RC4, the user chooses a key, which is a binary string between 40 and 256 bits long. This is put into the Key Scheduling Algorithm. This algorithm starts with an array S consisting of the numbers from 0 to 255, regarded as 8-bit bytes, and outputs a permutation of these entries, as follows:
This algorithm starts by initializing the entries in S as S[i] = i for i running from 0 through 255. Suppose the user-supplied key is K (let’s say the key length is 40 bits).
The algorithm starts with and . The value of is updated to . Then and are swapped, so now and .
We now move to . The value is updated to , so is swapped with itself, which means it is not changed here.
We now move to . The value is updated to , so is swapped with , yielding and .
We now move to . The value is updated to , so and are swapped, yielding and .
Let’s look at one more value of , namely, . The value is updated to (recall that became 2 earlier), and we obtain and .
This process continues through and yields an array of length 256 consisting of a permutation of the numbers from 0 through 255.
The array S is entered into the Pseudorandom Generation Algorithm.
This algorithm runs as long as needed and each round outputs a number between 0 and 255, regarded as an 8-bit byte. This byte is XORed with the corresponding byte of the plaintext to yield the ciphertext.
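The two algorithms together fit in a few lines. The following is a standard RC4 implementation in Python (the key b"Key" and message used below are our own examples):

```python
def rc4_keystream(key, length):
    # Key Scheduling Algorithm: S starts as [0, 1, ..., 255] and is
    # permuted under control of the key bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudorandom Generation Algorithm: one output byte per round.
    i = j = 0
    out = []
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4(key, data):
    # Encryption and decryption are the same operation: XOR with keystream.
    stream = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

cipher = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", cipher) == b"Plaintext"
```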
Weaknesses. Generally, the keystream that is output by a stream cipher should be difficult to distinguish from a randomly generated bitstream. For example, the R Game (see Section 4.5) could be played, and the probability of winning should be negligibly larger than 1/2. For RC4, there are certain observable biases. The second byte in the output should be 0 with probability 1/256. However, Mantin and Shamir [Mantin-Shamir] showed that this byte is 0 with twice that probability. Moreover, they found that the probability that the first two bytes are simultaneously 0 is noticeably larger than the expected $1/256^2 = 1/65536$.
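The second-byte bias is easy to observe empirically. The sketch below (with random 128-bit keys, an arbitrary choice for the demo) runs the KSA and two rounds of the PRGA many times and counts how often the second keystream byte is 0; the count should land near twice the one-in-256 rate expected of a truly random stream:

```python
import os

def rc4_second_byte(key: bytes) -> int:
    # Run the KSA, then two rounds of the PRGA, and return the second
    # output byte of the keystream.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = 0
    for _ in range(2):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out = S[(S[i] + S[j]) % 256]
    return out

trials = 20000
zeros = sum(rc4_second_byte(os.urandom(16)) == 0 for _ in range(trials))
# A random stream would give about trials/256 (roughly 78) zeros here;
# the observed count is close to twice that.
```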
Biases have also been found in the state that is output by the Key Scheduling Algorithm. For example, the probability that is about 37% larger than the expected probability of , while the probability that is 26% less than expected.
Although any key length from 40 to 256 bits can be chosen, the use of small key sizes is not recommended because the algorithm can succumb to a brute force attack.
A sequence generated by a length 3 recurrence starts 001110. Find the next four elements of the sequence.
The LFSR sequence is generated by a recurrence relation of length 3: . Find the coefficients .
The LFSR sequence is generated by a recurrence relation of length 4: . Find the coefficients .
The LFSR sequence is generated by a recurrence of length 3: . Find the coefficients , , and .
Suppose we build an LFSR machine that works mod 3 instead of mod 2. It uses a recurrence of length 2 of the form
to generate the sequence 1, 1, 0, 2, 2, 0, 1, 1. Set up and solve the matrix equation to find the coefficients and .
The sequence is generated by a recurrence relation
Determine .
Consider the sequence starting and defined by the length 3 recurrence . This sequence can also be given by a length 2 recurrence. Determine this length 2 recurrence by setting up and solving the appropriate matrix equations.
Suppose we build an LFSR-type machine that works mod 2. It uses a recurrence of length 2 of the form
to generate the sequence 1,1,0,0,1,1,0,0. Find and .
Suppose you modify the LFSR method to work mod 5 and you use a (not quite linear) recurrence relation
Find the coefficients and .
Suppose you make a modified LFSR machine using the recurrence relation , where are constants. Suppose the output starts 0, 1, 1, 0, 0, 1, 1, 0, 0. Find the constants .
Show that the sequence does not satisfy any linear recurrence of the form .
Bob has a great idea to generate pseudorandom bytes. He takes the decimal expansion of $\pi$, which we assume is random, and chooses three consecutive digits in this expansion, starting at a randomly chosen point. He then regards what he gets as a three-digit integer, which he writes as a 10-digit number in binary. Finally, he chooses the last eight binary digits to get a byte. For example, if the digits of $\pi$ that he chooses are 159, he changes this to 0010011111. This yields the byte 10011111.
Show that this pseudorandom number generator produces some bytes more often than others.
Suppose Bob modifies his algorithm so that if his three-digit decimal integer is greater than or equal to 512, then he discards it and tries again. Show that this produces random output (assuming the same is true for the digits of $\pi$).
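The counting behind both parts can be checked directly. Taking the last eight binary digits of a number is the same as reducing it mod 256, so we only need to count residues:

```python
from collections import Counter

# Part (a): the three-digit integers 000-999 reduce mod 256 unevenly.
# Since 1000 = 3 * 256 + 232, the byte values 0..231 each arise from
# four three-digit integers, while 232..255 arise from only three.
counts = Counter(n % 256 for n in range(1000))

# Part (b): discarding values >= 512 leaves the 512 = 2 * 256 integers
# 000-511, which hit every byte value exactly twice, hence uniformly.
counts_fixed = Counter(n % 256 for n in range(512))
```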
Suppose you have a Geiger counter and a radioactive source. If the Geiger counter counts an even number of radioactive particles in a second, you write 0. If it records an odd number of particles, you write 1. After a period of time, you have a binary sequence. It is reasonable to expect that the probability $p_n$ that $n$ particles are counted in a second satisfies a Poisson distribution
$$p_n = e^{-\lambda} \frac{\lambda^n}{n!}, \qquad n = 0, 1, 2, \dots,$$
where $\lambda > 0$ is a parameter (in fact, $\lambda$ is the average number of particles per second).
Show that if then .
Show that if then the binary sequence you obtain is expected to have more 0s than 1s.
More generally, show that, whenever ,
Show that for every ,
This problem shows that, although a Geiger counter might be a good source of randomness, the naive method of using it to obtain a pseudorandom sequence is biased.
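The source of the bias can be seen in one short computation: splitting the Poisson probabilities $p_n = e^{-\lambda}\lambda^n/n!$ by the parity of $n$ and summing with alternating signs gives

```latex
P(\text{even}) - P(\text{odd})
  = \sum_{n=0}^{\infty} (-1)^n\, e^{-\lambda}\, \frac{\lambda^n}{n!}
  = e^{-\lambda}\, e^{-\lambda}
  = e^{-2\lambda} \;>\; 0,
```

so $P(\text{even}) = \tfrac{1}{2}\bigl(1 + e^{-2\lambda}\bigr) > \tfrac{1}{2}$ for every $\lambda > 0$, which is exactly the excess of 0s over 1s described above.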
Suppose that during the PRGA of RC4, there occur values of and such that and . The next values of in the algorithm are , with and . Show that , so this property continues for all future .
The values of before are and . Show that , so if this property occurs, then it occurred for all previous values of .
The starting values of and are and . Use this to show that there are never values of such that and .
Let be a one-way function. In Section 5.1, it was pointed out that usually the least significant bits of for ( is a seed) can be used to give a pseudorandom sequence of bits. Show how to append some bits to to obtain a new one-way function for which the sequence of least significant bits is not pseudorandom.
The following sequence was generated by a linear feedback shift register. Determine the recurrence that generated it.
1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1
(It is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name L101.)
The following are the first 100 terms of an LFSR output. Find the coefficients of the recurrence.
1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0
(It is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name L100.)
The following ciphertext was obtained by XORing an LFSR output with the plaintext.
0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1
Suppose you know the plaintext starts
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0
Find the plaintext. (The ciphertext is stored in the downloadable computer files (bit.ly/2JbcS6p) under the name L011.)
In many classical cryptosystems, changing one letter in the plaintext changes exactly one letter in the ciphertext. In the shift, affine, and substitution ciphers, a given letter in the ciphertext always comes from exactly one letter in the plaintext. This greatly facilitates finding the key using frequency analysis. In the Vigenère system, the use of blocks of letters, corresponding to the length of the key, makes the frequency analysis more difficult, but still possible, since there is no interaction among the various letters in each block. Block ciphers avoid these problems by encrypting blocks of several letters or numbers simultaneously. A change of one character in a plaintext block should change potentially all the characters in the corresponding ciphertext block.
The Playfair cipher in Section 2.6 is a simple example of a block cipher, since it takes two-letter blocks and encrypts them to two-letter blocks. A change of one letter of a plaintext pair always changes at least one letter, and usually both letters, of the ciphertext pair. However, blocks of two letters are too small to be secure, and frequency analysis, for example, is usually successful.
Many of the modern cryptosystems that will be treated later in this book are block ciphers. For example, DES operates on blocks of 64 bits. AES uses blocks of 128 bits. RSA sometimes uses blocks more than 1000 bits long, depending on the modulus used. All of these block lengths are long enough to be secure against attacks such as frequency analysis.
Claude Shannon, in one of the fundamental papers on the theoretical foundations of cryptography [Shannon1], gave two properties that a good cryptosystem should have in order to hinder statistical analysis: diffusion and confusion.
Diffusion means that if we change a character of the plaintext, then several characters of the ciphertext should change, and, similarly, if we change a character of the ciphertext, then several characters of the plaintext should change. This means that frequency statistics of letters, digrams, etc. in the plaintext are diffused over several characters in the ciphertext, which means that much more ciphertext is needed to do a meaningful statistical attack.
Confusion means that the key does not relate in a simple way to the ciphertext. In particular, each character of the ciphertext should depend on several parts of the key. When a situation like this happens, the cryptanalyst probably needs to solve for the entire key simultaneously, rather than piece by piece.
The Vigenère and substitution ciphers do not have the properties of diffusion and confusion, which is why they are so susceptible to frequency analysis.
The concepts of diffusion and confusion play a role in any well-designed block cipher. Of course, a disadvantage (which is precisely the cryptographic advantage) of diffusion is error propagation: A small error in the ciphertext becomes a major error in the decrypted message, and usually means the decryption is unreadable.
The natural way of using a block cipher is to convert blocks of plaintext to blocks of ciphertext, independently and one at a time. This is called the electronic codebook (ECB) mode. Although it seems like the obvious way to implement a block cipher, we’ll see that it is insecure and that there are much better ways to use a block cipher. For example, it is possible to use feedback from the blocks of ciphertext in the encryption of subsequent blocks of plaintext. This leads to the cipher block chaining (CBC) mode and cipher feedback (CFB) mode of operation. These are discussed in Section 6.3.
For an extensive discussion of block ciphers, see [Schneier].
This section is not needed for understanding the rest of the chapter. It is included as an example of a block cipher.
In this section, we discuss the Hill cipher, which is a block cipher invented in 1929 by Lester Hill. It seems never to have been used much in practice. Its significance is that it was perhaps the first time that algebraic methods (linear algebra, modular arithmetic) were used in cryptography in an essential way. As we’ll see in later chapters, algebraic methods now occupy a central position in the subject.
Choose an integer $n$, for example $n = 3$. The key is an $n \times n$ matrix $M$ whose entries are integers mod 26. For example, let
$$M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 11 & 9 & 8 \end{pmatrix}.$$
The message is written as a series of row vectors. For example, if the message is abc, we change this to the single row vector $(0, 1, 2)$. To encrypt, multiply the vector by the matrix $M$ (traditionally, the matrix appears on the right in the multiplication; multiplying on the left would yield a similar theory) and reduce mod 26:
$$(0, 1, 2) \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 11 & 9 & 8 \end{pmatrix} = (26, 23, 22) \equiv (0, 23, 22) \pmod{26}.$$
Therefore, the ciphertext is AXW. (The fact that the first letter remained unchanged is a random occurrence; it is not a defect of the method.)
In order to decrypt, we need the determinant of $M$ to satisfy
$$\gcd(\det M, 26) = 1.$$
This means that there is a matrix $N$ with integer entries such that $MN \equiv I \pmod{26}$, where $I$ is the $n \times n$ identity matrix.
In our example, $\det M = -3$. The inverse of $M$ is
$$M^{-1} = \frac{1}{-3} \begin{pmatrix} -14 & 11 & -3 \\ 34 & -25 & 6 \\ -19 & 13 & -3 \end{pmatrix}.$$
Since 17 is the inverse of $-3 \pmod{26}$, we replace $1/(-3)$ by 17 and reduce mod 26 to obtain
$$N \equiv 17 \begin{pmatrix} -14 & 11 & -3 \\ 34 & -25 & 6 \\ -19 & 13 & -3 \end{pmatrix} \equiv \begin{pmatrix} 22 & 5 & 1 \\ 6 & 17 & 24 \\ 15 & 13 & 1 \end{pmatrix} \pmod{26}.$$
The reader can check that $MN \equiv I \pmod{26}$.
For more on finding inverses of matrices mod $n$, see Section 3.8. See also Example 15 in the Computer Appendices.
The decryption is accomplished by multiplying by $N$, as follows:
$$(0, 23, 22) \begin{pmatrix} 22 & 5 & 1 \\ 6 & 17 & 24 \\ 15 & 13 & 1 \end{pmatrix} = (468, 677, 574) \equiv (0, 1, 2) \pmod{26}.$$
In the general method with an $n \times n$ matrix $M$, break the plaintext into blocks of $n$ characters and change each block to a row vector of $n$ integers between 0 and 25 using $a = 0$, $b = 1$, \dots, $z = 25$. For example, with the matrix as above, suppose our plaintext is
This becomes (we add an to fill the last space)
Now multiply each vector by , reduce the answer mod 26, and change back to letters:
In our case, the ciphertext is
It is easy to see that changing one letter of plaintext will usually change $n$ letters of ciphertext. For example, if one letter in a plaintext block is changed, then typically all $n$ letters of the corresponding ciphertext block change. This makes frequency counts less effective, though they are not impossible when $n$ is small. The frequencies of two-letter combinations, called digrams, and three-letter combinations, trigrams, have been computed. Beyond that, the number of combinations becomes too large (though tabulating the results for certain common combinations would not be difficult). Also, the frequencies of combinations are so low that it is hard to get meaningful data without a very large amount of text.
Now that we have the ciphertext, how do we decrypt? Simply break the ciphertext into blocks of length $n$, change each to a row vector of integers mod 26, and multiply on the right by the inverse matrix $N$. In our example, we decrypt the first block exactly as before,
and similarly for the remainder of the ciphertext.
For another example, see Example 21 in the Computer Appendices.
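The encryption and decryption procedure can be sketched in a few lines of Python. The key matrix `M` below is an assumed example, chosen so that it encrypts abc to AXW in agreement with the worked example, and `N` is its inverse mod 26:

```python
M = [[1, 2, 3], [4, 5, 6], [11, 9, 8]]      # assumed example key matrix
N = [[22, 5, 1], [6, 17, 24], [15, 13, 1]]  # its inverse mod 26

def hill(text: str, mat) -> str:
    # Encrypt with mat = M; decrypt by passing the inverse, mat = N.
    # The text length must be a multiple of n = len(mat).
    n = len(mat)
    nums = [ord(c) - ord("a") for c in text.lower()]
    out = []
    for i in range(0, len(nums), n):
        block = nums[i:i + n]   # row vector for one n-letter block
        # Row vector times matrix, reduced mod 26.
        out.extend(sum(block[r] * mat[r][c] for r in range(n)) % 26
                   for c in range(n))
    return "".join(chr(v + ord("A")) for v in out)
```

For instance, `hill("abc", M)` gives the ciphertext and `hill(..., N)` undoes it.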
The Hill cipher is difficult to decrypt using only the ciphertext, but it succumbs easily to a known plaintext attack. If we do not know $n$, we can try various values until we find the right one. So suppose $n$ is known. If we have $n$ blocks of plaintext of size $n$, then we can use the plaintext and the corresponding ciphertext to obtain a matrix equation for $M$ (or for $N$, which might be more useful). For example, suppose we know that $n = 2$ and we have the plaintext
corresponding to the ciphertext
The first two blocks yield the matrix equation
Unfortunately, the matrix has determinant , which is not invertible mod 26 (though this matrix could be used to reduce greatly the number of choices for the encryption matrix). Therefore, we replace the last row of the equation, for example, by the fifth block to obtain
In this case, the matrix is invertible mod 26:
We obtain
Because the Hill cipher is vulnerable to this attack, it cannot be regarded as being very strong.
A chosen plaintext attack proceeds by the same strategy, but is a little faster. Again, if you do not know $n$, try various possibilities until one works. So suppose $n$ is known. Choose the first block of plaintext to be $(1, 0, 0, \dots, 0)$, the second to be $(0, 1, 0, \dots, 0)$, and continue through the $n$th block being $(0, 0, \dots, 0, 1)$. The blocks of ciphertext will be the rows of the matrix $M$.
For a chosen ciphertext attack, use the same strategy as for chosen plaintext, where the choices now represent ciphertext. The resulting blocks of plaintext will be the rows of the inverse matrix $N$.
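The chosen plaintext attack amounts to a matrix read-off: since the unit vector $e_i$ times $M$ is the $i$th row of $M$, encrypting the unit-vector blocks hands Eve the key matrix directly. A small sketch (the secret matrix below is an assumed example):

```python
M_secret = [[1, 2, 3], [4, 5, 6], [11, 9, 8]]  # assumed secret key matrix

def encrypt_block(vec, mat):
    # Row vector times matrix, reduced mod 26 (the Hill encryption step).
    n = len(mat)
    return [sum(vec[r] * mat[r][c] for r in range(n)) % 26 for c in range(n)]

n = len(M_secret)
# Eve submits (1,0,0), (0,1,0), (0,0,1) as plaintext blocks; the
# ciphertext blocks she receives are exactly the rows of M.
unit_vectors = [[1 if c == r else 0 for c in range(n)] for r in range(n)]
recovered = [encrypt_block(e, M_secret) for e in unit_vectors]
```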
Suppose we have a block cipher. It can encrypt a block of plaintext of a fixed size, for example 64 bits. There are many circumstances, however, where it is necessary to encrypt messages that are either longer or shorter than the cipher’s block length. For example, a bank may be sending a terabyte of data to another bank. Or you might be sending a short message that needs to be encrypted one letter at a time since you want to produce ciphertext output as quickly as you write the plaintext input.
Block ciphers can be run in many different modes of operation, allowing users to choose appropriate modes to meet the requirements of their applications. There are five common modes of operation: electronic codebook (ECB), cipher block chaining (CBC), cipher feedback (CFB), output feedback (OFB), and counter (CTR) modes. We now discuss these modes.
The natural manner for using a block cipher is to break a long piece of plaintext into appropriately sized blocks of plaintext and process each block separately with the encryption function $E_K$. This is known as the electronic codebook (ECB) mode of operation. The plaintext is broken into smaller chunks $P = [P_1, P_2, \dots, P_L]$ and the ciphertext is
$$C = [C_1, C_2, \dots, C_L],$$
where $C_j = E_K(P_j)$ is the encryption of $P_j$ using the key $K$.
There is a natural weakness in the ECB mode of operation that becomes apparent when dealing with long pieces of plaintext. Say an adversary Eve has been observing communication between Alice and Bob for a long enough period of time. If Eve has managed to acquire some plaintext pieces corresponding to the ciphertext pieces that she has observed, she can start to build up a codebook with which she can decipher future communication between Alice and Bob. Eve never needs to calculate the key $K$; she just looks up a ciphertext message in her codebook and uses the corresponding plaintext (if available) to decipher the message.
This can be a serious problem since many real-world messages consist of repeated fragments. E-mail is a prime example. An e-mail between Alice and Bob might start with the following header:
Date: Tue, 29 Feb 2000 13:44:38 -0500 (EST)
The ciphertext starts with the encrypted version of “Date: Tu”. If Eve finds that this piece of ciphertext often occurs on a Tuesday, she might be able to guess, without knowing any of the plaintext, that such messages are e-mail sent on Tuesdays. With patience and ingenuity, Eve might be able to piece together enough of the message’s header and trailer to figure out the context of the message. With even greater patience and computer memory, she might be able to piece together important pieces of the message.
Another problem that arises in ECB mode occurs when Eve tries to modify the encrypted message being sent to Bob. She might be able to extract important portions of the message and use her codebook to construct a false ciphertext message that she can insert in the data stream.
One method for reducing the problems that occur in ECB mode is to use chaining. Chaining is a feedback mechanism where the encryption of a block depends on the encryption of previous blocks.
In particular, encryption proceeds as
$$C_j = E_K(P_j \oplus C_{j-1}),$$
while decryption proceeds as
$$P_j = D_K(C_j) \oplus C_{j-1},$$
where $C_0$ is some chosen initial value. As usual, $E_K$ and $D_K$ denote the encryption and decryption functions for the block cipher.
Thus, in CBC mode, the plaintext is XORed with the previous ciphertext block and the result is encrypted. Figure 6.1 depicts CBC.
The initial value $C_0$ is often called the initialization vector, or the IV. If we use a fixed value for $C_0$, say $C_0 = 0$, and ever send the same plaintext message twice, the two resulting ciphertexts will be the same. This is undesirable since it allows the adversary to deduce that the same plaintext was sent. This can be very valuable information, and can often be used by the adversary to infer the meaning of the original plaintext.
In practice, this problem is handled by always choosing $C_0$ randomly and sending it in the clear along with the first ciphertext block $C_1$. By doing so, even if the same plaintext message is sent repeatedly, an observer will see a different ciphertext each time.
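The CBC equations can be sketched directly in Python. Since a real block cipher such as DES or AES is not in the standard library, a toy 4-round Feistel network on 64-bit blocks stands in for $E_K$ (an assumption made purely so the sketch is self-contained):

```python
import hashlib, os

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Round function for a toy 4-round Feistel network standing in for a
    # real 64-bit block cipher; for illustration only, not secure.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    L, R = block[:4], block[4:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(R, key, rnd)))
    return L + R

def decrypt_block(block: bytes, key: bytes) -> bytes:
    L, R = block[:4], block[4:]
    for rnd in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, _f(L, key, rnd))), L
    return L + R

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes):
    # C_j = E_K(P_j XOR C_{j-1}); C_0 is a random IV sent in the clear.
    iv = os.urandom(8)
    prev, out = iv, []
    for i in range(0, len(plaintext), 8):
        c = encrypt_block(xor(plaintext[i:i + 8], prev), key)
        out.append(c)
        prev = c
    return iv, b"".join(out)

def cbc_decrypt(iv: bytes, ciphertext: bytes, key: bytes) -> bytes:
    # P_j = D_K(C_j) XOR C_{j-1}
    prev, out = iv, []
    for i in range(0, len(ciphertext), 8):
        c = ciphertext[i:i + 8]
        out.append(xor(decrypt_block(c, key), prev))
        prev = c
    return b"".join(out)
```

Because the IV is random, encrypting the same plaintext twice yields different ciphertexts, as described above.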
One of the problems with both the CBC and ECB methods is that encryption (and hence decryption) cannot begin until a complete block of plaintext data is available. The cipher feedback mode operates in a manner that is very similar to the way in which LFSRs are used to encrypt plaintext. Rather than use linear recurrences to generate random bits, the cipher feedback mode is a stream mode of operation that produces pseudorandom bits using the block cipher $E_K$. In general, CFB operates in a $k$-bit mode, where each application of the block cipher produces $k$ pseudorandom bits for XORing with the plaintext. For our discussion, however, we focus on the eight-bit version of CFB. Using the eight-bit CFB allows one 8-bit piece of message (e.g., a single character) to be encrypted without having to wait for an entire block of data to be available. This is useful in interactive computer communications, for example.
For concreteness, let's assume that our block cipher encrypts blocks of 64 bits and outputs blocks of 64 bits (the sizes of the registers can easily be adjusted for other block sizes). The plaintext is broken into 8-bit pieces: $P = [P_1, P_2, \dots]$, where each $P_j$ has eight bits, rather than the 64 bits used in ECB and CBC. Encryption proceeds as follows. An initial 64-bit $X_1$ is chosen. Then for $j = 1, 2, 3, \dots$, the following is performed:
$$O_j = L_8(E_K(X_j)),$$
$$C_j = P_j \oplus O_j,$$
$$X_{j+1} = R_{56}(X_j) \,\|\, C_j,$$
where $L_8(X)$ denotes the 8 leftmost bits of $X$, $R_{56}(X)$ denotes the rightmost 56 bits of $X$, and $X \| Y$ denotes the string obtained by writing $X$ followed by $Y$. We present the CFB mode of operation in Figure 6.2.
Decryption is done with the following steps:
$$P_j = C_j \oplus L_8(E_K(X_j)),$$
$$X_{j+1} = R_{56}(X_j) \,\|\, C_j.$$
We note that decryption does not involve the decryption function $D_K$. This would be an advantage of running a block cipher in a stream mode in a case where the decryption function for the block cipher is slower than the encryption function.
Let’s step through one round of the CFB algorithm. First, we have a 64-bit register that is initialized with $X_1$. These 64 bits are encrypted using $E_K$, and the leftmost eight bits of the result are extracted and XORed with the 8-bit $P_1$ to form $C_1$. Then $C_1$ is sent to the recipient. Before working with $P_2$, the 64-bit register is updated by extracting the rightmost 56 bits of $X_1$. The eight bits of $C_1$ are appended on the right to form $X_2 = R_{56}(X_1) \| C_1$. Then $P_2$ is encrypted by the same process, but using $X_2$ in place of $X_1$. After $P_2$ is encrypted to $C_2$, the 64-bit register is updated to form
$$X_3 = R_{56}(X_2) \,\|\, C_2.$$
By the end of the 8th round, the initial $X_1$ has disappeared from the 64-bit register and $X_9 = C_1 \| C_2 \| \cdots \| C_8$. The $C_j$ continue to pass through the register, so for example $X_{12} = C_4 \| C_5 \| \cdots \| C_{11}$.
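The steps above can be sketched in Python. Again, a toy 4-round Feistel network on 64-bit blocks stands in for the real block cipher $E_K$ (an assumption for illustration only):

```python
import hashlib

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Toy 4-round Feistel round function; not a secure cipher.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    L, R = block[:4], block[4:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(R, key, rnd)))
    return L + R

def cfb8_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    x, out = iv, bytearray()              # x is the 64-bit register X_j
    for p in plaintext:                   # one 8-bit piece P_j at a time
        c = p ^ encrypt_block(x, key)[0]  # C_j = P_j XOR L_8(E_K(X_j))
        out.append(c)
        x = x[1:] + bytes([c])            # X_{j+1} = R_56(X_j) || C_j
    return bytes(out)

def cfb8_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # Only the encryption function of the block cipher is needed.
    x, out = iv, bytearray()
    for c in ciphertext:
        out.append(c ^ encrypt_block(x, key)[0])
        x = x[1:] + bytes([c])
    return bytes(out)
```

Corrupting one ciphertext byte garbles the corresponding plaintext byte plus the next eight (while the bad byte sits in the register), after which decryption recovers, matching the error-recovery discussion below.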
Note that CFB encrypts the plaintext in a manner similar to one-time pads or LFSRs. The key and the numbers are used to produce binary strings that are XORed with the plaintext to produce the ciphertext. This is a much different type of encryption than the ECB and CBC, where the ciphertext is the output of DES.
In practical applications, CFB is useful because it can recover from errors in transmission of the ciphertext. Suppose that the transmitter sends the ciphertext blocks $C_1, C_2, \dots$, and $C_1$ is corrupted during transmission, so that the receiver observes $\tilde{C}_1$. Decryption takes $\tilde{C}_1$ and produces a garbled version of $P_1$ with bit errors in the locations where $\tilde{C}_1$ had bit errors. Now, after decrypting this ciphertext block, the receiver forms an incorrect $X_2$, which we denote $\tilde{X}_2$. If $X_2$ was $R_{56}(X_1) \| C_1$, then $\tilde{X}_2 = R_{56}(X_1) \| \tilde{C}_1$. When the receiver gets an uncorrupted $C_2$ and decrypts, then a completely garbled version of $P_2$ is produced. When forming $X_3$, the decrypter actually forms $\tilde{X}_3 = R_{56}(\tilde{X}_2) \| C_2$. The decrypter repeats this process, ultimately getting bad versions of $P_2, \dots, P_9$. When the decrypter calculates $X_9$, the error block has moved to the leftmost block of $X_9$ as $\tilde{C}_1$. At the next step, the error will have been flushed from the register, and $X_{10}$ and subsequent registers will be uncorrupted. For a simplified version of these ideas, see Exercise 18.
The CBC and CFB modes of operation exhibit a drawback in that errors propagate for a duration of time corresponding to the block size of the cipher. The output feedback mode (OFB) is another example of a stream mode of operation for a block cipher where encryption is performed by XORing the message with a pseudorandom bit stream generated by the block cipher. One advantage of the OFB mode is that it avoids error propagation.
Much like CFB, OFB may work on chunks of different sizes. For our discussion, we focus on the eight-bit version of OFB, where OFB is used to encrypt eight-bit chunks of plaintext in a streaming mode. Just as for CFB, we break our plaintext into eight-bit pieces $P_1, P_2, \dots$. We start with an initial value $X_1$, which has a length equal to the block length of the cipher, for example, 64 bits (the sizes of the registers can easily be adjusted for other block sizes). $X_1$ is often called the IV, and should be chosen to be random. $X_1$ is encrypted using the key to produce 64 bits of output, and the leftmost eight bits $O_1$ of the result are extracted. These are then XORed with the first eight bits of the plaintext to produce eight bits of ciphertext, $C_1 = P_1 \oplus O_1$.
So far, this is the same as what we were doing in CFB. But OFB differs from CFB in what happens next. In order to iterate, CFB updates the register by extracting the right 56 bits of $X_1$ and appending $C_1$ to the right side. Rather than use the ciphertext, OFB uses the output of the encryption. That is, OFB updates the register by extracting the right 56 bits of $X_1$ and appending $O_1$ to the right side.
In general, the following procedure is performed for $j = 1, 2, 3, \dots$:
$$O_j = L_8(E_K(X_j)),$$
$$C_j = P_j \oplus O_j,$$
$$X_{j+1} = R_{56}(X_j) \,\|\, O_j.$$
We depict the steps for the OFB mode of operation in Figure 6.3. Here, the output stream $O_j$ is the encryption of the register containing the previous output from the block cipher. This output is then treated as a keystream and is XORed with the incoming plaintexts to produce a stream of ciphertexts. Decryption is very simple: we get the plaintext by XORing the corresponding ciphertext $C_j$ with the output keystream $O_j$. Again, just like CFB, we do not need the decryption function $D_K$.
So why would one want to build a stream cipher this way as opposed to the way the CFB stream cipher was built? There are a few key advantages to the OFB strategy. First, the generation of the output key stream may be performed completely without any plaintext. What this means is that the key stream can be generated in advance. This might be desirable for applications where we cannot afford to perform encryption operations as the plaintext message arrives.
Another advantage lies in its performance when errors are introduced to the ciphertext. Suppose a few bit errors are introduced to $C_j$ when it is delivered to the receiver. Then only the corresponding bits in the plaintext $P_j$ are corrupted when decryption is performed. Since we build future output streams using the encryption of the register, and not using the corrupted ciphertext, the output stream will always remain clean and the errors in the ciphertext will not propagate.
To summarize, CFB required the register to completely flush itself of errors, which produced an entire block length of garbled plaintext bits. OFB, on the other hand, will immediately correct itself.
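An OFB sketch differs from the CFB one only in the register update, which feeds back the cipher output rather than the ciphertext. As before, a toy 4-round Feistel network stands in for the real block cipher (an assumption for illustration only):

```python
import hashlib

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Toy 4-round Feistel round function; not a secure cipher.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    L, R = block[:4], block[4:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(R, key, rnd)))
    return L + R

def ofb8_keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # O_j = L_8(E_K(X_j)); X_{j+1} = R_56(X_j) || O_j.
    # Note the keystream can be computed before any plaintext arrives.
    x, out = iv, bytearray()
    for _ in range(n):
        o = encrypt_block(x, key)[0]
        out.append(o)
        x = x[1:] + bytes([o])
    return bytes(out)

def ofb8_crypt(data: bytes, key: bytes, iv: bytes) -> bytes:
    # Encryption and decryption are the same XOR against the keystream.
    ks = ofb8_keystream(key, iv, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

A single flipped ciphertext bit corrupts only the corresponding plaintext bit, in contrast to CFB.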
There is one problem associated with OFB, however, that is common to all stream ciphers that are obtained by XORing pseudorandom numbers with plaintext. If Eve knows a particular plaintext $P$ and its ciphertext $C$, she can conduct the following attack. She first calculates
$$O = P \oplus C$$
to get out the key stream. She may then create any false plaintext $P'$ she wants. Now, to produce a ciphertext, she merely has to XOR $P'$ with the output stream she calculated:
$$C' = P' \oplus O.$$
This allows her to modify messages.
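A minimal sketch of this forgery (with a random string standing in for the OFB output stream, an assumption made just for the demo; the messages are invented):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)         # stands in for the OFB output stream
p = b"PAY ALICE $100.."            # plaintext Eve happens to know
c = xor(p, keystream)              # ciphertext Eve intercepts

recovered = xor(p, c)              # Eve recovers the keystream
p_false = b"PAY EVE $99999.."      # Eve's substitute message
c_false = xor(p_false, recovered)  # a ciphertext the receiver will accept
```

The receiver, decrypting `c_false` with the true keystream, obtains Eve's message instead of the original.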
The counter (CTR) mode builds upon the ideas that were used in the OFB mode. Just like OFB, CTR creates an output key stream that is XORed with chunks of plaintext to produce ciphertext. The main difference between CTR and OFB lies in the fact that the output stream in CTR is not linked to previous output streams.
CTR starts with the plaintext broken into eight-bit pieces, $P_1, P_2, \dots$. We begin with an initial value $X_1$, which has a length equal to the block length of the cipher, for example, 64 bits. Now, $X_1$ is encrypted using the key to produce 64 bits of output, and the leftmost eight bits of the result are extracted and XORed with $P_1$ to produce eight bits of ciphertext, $C_1 = P_1 \oplus L_8(E_K(X_1))$.
Now, rather than update the register to contain the output of the block cipher, we simply take $X_2 = X_1 + 1$. In this way, $X_2$ does not depend on previous output. CTR then creates a new output stream by encrypting $X_2$. Similarly, we may proceed by using $X_3 = X_2 + 1$, and so on. The ciphertext $C_j$ is produced by XORing the left eight bits from the encryption of the register $X_j$ with the corresponding plaintext $P_j$.
In general, the procedure for CTR is
$$X_j = X_{j-1} + 1,$$
$$C_j = P_j \oplus L_8(E_K(X_j))$$
for $j = 2, 3, 4, \dots$, and is presented in Figure 6.4. The reader might wonder what happens to $X_j$ if we continually add 1 to it. Shouldn't it eventually become too large? This is unlikely to happen, but if it does, we simply wrap around mod $2^{64}$ and start back at 0.
Just like OFB, the registers can be calculated ahead of time, and the actual encryption of plaintext is simple in that it involves just the XOR operation. As a result, its performance is identical to OFB's when errors are introduced in the ciphertext. The advantage of CTR mode compared to OFB, however, stems from the fact that many output chunks may be calculated in parallel: we do not have to calculate the $(j-1)$st output chunk before calculating the $j$th. This makes CTR mode ideal for parallelizing.
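A CTR sketch is even shorter than the OFB one, since the register is just a counter. The toy 4-round Feistel network again stands in for the real block cipher (an assumption for illustration only):

```python
import hashlib

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Toy 4-round Feistel round function; not a secure cipher.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:4]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    L, R = block[:4], block[4:]
    for rnd in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(R, key, rnd)))
    return L + R

def ctr8_crypt(data: bytes, key: bytes, iv: int) -> bytes:
    # X_j = X_1 + (j - 1) mod 2^64; C_j = P_j XOR L_8(E_K(X_j)).
    # Each keystream byte depends only on its counter value, so the
    # chunks can be computed in parallel or in any order.
    out = bytearray()
    for j, d in enumerate(data):
        x = ((iv + j) % 2**64).to_bytes(8, "big")
        out.append(d ^ encrypt_block(x, key)[0])
    return bytes(out)
```

Because each counter value is independent of the others, any single byte can be decrypted without processing the bytes before it, which is what makes parallel implementations possible.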
As technology improves and more sophisticated attacks are developed, encryption systems become less secure and need to be replaced. There are two main approaches to achieving increased security. The first involves using encryption multiple times and leads, for example, to triple encryption. The second approach is to find a new system that is more secure, a potentially lengthy process.
We start by describing the idea behind multiple encryption schemes. The idea is to encrypt the same plaintext multiple times using the same algorithm with different keys. Double encryption encrypts the plaintext by first encrypting with one key and then encrypting again using another key. For example, if the keyspace for single encryption has 56 bits, hence $2^{56}$ keys, then the new keyspace consists of $2^{112}$ keys. One might guess that double encryption should therefore double the security. This, however, is not true. Merkle and Hellman showed that the double encryption scheme actually has the security level of a 57-bit key. The reduction from $2^{112}$ to $2^{57}$ makes use of the meet-in-the-middle attack, which is described in the next section.
Since double encryption has a weakness, triple encryption is often used. This appears to have a level of security approximately equivalent to a 112-bit key (when the single encryption has a 56-bit key). There are at least two ways that triple encryption can be implemented. One is to choose three keys, $K_1, K_2, K_3$, and perform $E_{K_1}(E_{K_2}(E_{K_3}(m)))$. This type of triple encryption is sometimes called EEE. The other is to choose two keys, $K_1$ and $K_2$, and perform $E_{K_1}(D_{K_2}(E_{K_1}(m)))$. This is sometimes called EDE. When $K_1 = K_2$, this reduces to single encryption. Therefore, a triple encryption machine that is communicating with an older machine that still uses single encryption can simply set $K_1 = K_2$ and proceed. This compatibility is the reason for using $D_{K_2}$ instead of $E_{K_2}$ in the middle; the use of $D$ instead of $E$ gives no extra cryptographic strength. Both versions of triple encryption are resistant to meet-in-the-middle attacks (compare with Exercise 11). However, there are other attacks on the two-key version ([Merkle-Hellman] and [van Oorschot-Wiener]) that indicate possible weaknesses, though they require so much memory as to be impractical.
Another strengthening of encryption was proposed by Rivest. Choose three keys, $K_1, K_2, K_3$, and perform $K_3 \oplus E_{K_2}(m \oplus K_1)$. In other words, modify the plaintext by XORing with $K_1$, then apply encryption with $K_2$, then XOR the result with $K_3$. This method, when used with DES, is known as DESX and has been shown to be fairly secure. See [Kilian-Rogaway].
Alice and Bob are using an encryption method. The encryption functions are called $E_K$, and the decryption functions are called $D_K$, where $K$ is a key. We assume that if someone knows $K$, then she also knows $E_K$ and $D_K$ (so Alice and Bob could be using one of the classical, nonpublic key systems such as DES or AES). They have a great idea. Instead of encrypting once, they use two keys $K_1$ and $K_2$ and encrypt twice. Starting with a plaintext message $m$, the ciphertext is $c = E_{K_2}(E_{K_1}(m))$. To decrypt, simply compute $m = D_{K_1}(D_{K_2}(c))$. Eve will need to discover both $K_1$ and $K_2$ to decrypt their messages.
Does this provide greater security? For many cryptosystems, applying two encryptions is the same as using an encryption for some other key. For example, the composition of two affine functions is still an affine function (see Exercise 11 in Chapter 2). Similarly, using two RSA encryptions (with the same modulus $n$) with exponents $e_1$ and $e_2$ corresponds to doing a single encryption with exponent $e_1 e_2$. In these cases, double encryption offers no advantage. However, there are systems, such as DES (see Subsection 7.4.1), where the composition of two encryptions is not simply encryption with another key. For these, double encryption might seem to offer a much higher level of security. However, the following attack shows that this is not really the case, as long as we have a computer with a lot of memory.
Assume Eve has intercepted a message $m$ and a doubly encrypted ciphertext $c = E_{K_2}(E_{K_1}(m))$. She wants to find $K_1$ and $K_2$. She first computes two lists:
$E_{K_1}(m)$ for all possible keys $K_1$, and
$D_{K_2}(c)$ for all possible keys $K_2$.
Finally, she compares the two lists and looks for matches. There will be at least one match, since the correct pair of keys will be one of them, but it is likely that there will be many matches. If there are several matches, she then takes another plaintext–ciphertext pair and determines which of the pairs she has found will encrypt the plaintext to the ciphertext. This should greatly reduce the list. If there is still more than one pair remaining, she continues until only one pair remains (or she decides that two or more pairs give the same double encryption function). Eve now has the desired pair $(K_1, K_2)$.
If Eve has only one plaintext–ciphertext pair, she still has reduced the set of possible key pairs to a short list. If she intercepts a future transmission, she can try each of these possibilities and obtain a very short list of meaningful plaintexts.
If there are N possible keys, Eve needs to compute the N values E_{k_1}(m). She then needs to compute the N numbers D_{k_2}(c) and compare them with the stored list. But these 2N computations (plus the comparisons) are much less than the N^2 computations required for searching through all key pairs (k_1, k_2).
This meet-in-the-middle procedure takes slightly longer than the exhaustive search through all keys for single encryption. It also takes a lot of memory to store the first list. However, the conclusion is that double encryption does not significantly raise the level of security.
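The procedure can be sketched in code. The "cipher" below is a hypothetical toy example (an invertible affine map on 16-bit blocks with 12-bit keys), chosen only so that the lists are small; it is not a real cryptosystem, and only illustrates how the two lists are built and matched.

```python
# A sketch of the meet-in-the-middle attack on double encryption,
# using a toy invertible cipher so that the full key space is tiny.

MOD = 1 << 16
KEYBITS = 12

def E(k, m):
    """Toy block encryption: m -> m*(2k+1) + k (mod 2^16), invertible."""
    return (m * (2 * k + 1) + k) % MOD

def D(k, c):
    """Inverse of E: the multiplier 2k+1 is odd, hence invertible mod 2^16."""
    return ((c - k) * pow(2 * k + 1, -1, MOD)) % MOD

def meet_in_the_middle(pairs):
    """pairs: plaintext-ciphertext pairs with c = E_{k2}(E_{k1}(m)).
    Returns the surviving candidate key pairs (k1, k2)."""
    m0, c0 = pairs[0]
    # List 1: E_{k1}(m0) for every k1, indexed by the middle value.
    middle = {}
    for k1 in range(1 << KEYBITS):
        middle.setdefault(E(k1, m0), []).append(k1)
    # List 2: D_{k2}(c0) for every k2; each match yields a candidate pair.
    candidates = [(k1, k2)
                  for k2 in range(1 << KEYBITS)
                  for k1 in middle.get(D(k2, c0), [])]
    # Further plaintext-ciphertext pairs weed out accidental matches.
    for m, c in pairs[1:]:
        candidates = [(k1, k2) for (k1, k2) in candidates
                      if E(k2, E(k1, m)) == c]
    return candidates

k1_true, k2_true = 0x3A7, 0x9C1
pairs = [(m, E(k2_true, E(k1_true, m))) for m in (12345, 54321, 777)]
print((k1_true, k2_true) in meet_in_the_middle(pairs))   # prints True
```

The work is about 2 · 2^12 encryptions plus dictionary lookups, rather than the 2^24 needed to try all key pairs, exactly the 2N-versus-N^2 trade described above.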
Similarly, we could use triple encryption, using triples (k_1, k_2, k_3) of keys. A similar attack brings the level of security down to at most what one might naively expect from double encryption, namely squaring the possible number of keys.
Suppose the single encryption has 2^56 possible keys and the block cipher inputs and outputs blocks of 64 bits, as is the case with DES. The first list has 2^56 entries, each of which is a 64-bit block. The probability that a given block in List 1 matches a given block in List 2 is 2^-64. Since there are 2^56 entries in List 2, we expect that a given block in List 1 matches 2^56 · 2^-64 = 2^-8 entries of List 2. Running through the 2^56 elements in List 1, we expect 2^56 · 2^-8 = 2^48 pairs for which there are matches between List 1 and List 2.
We know that one of these matches is from the correct pair (k_1, k_2), and the other matches are probably caused by randomness. If we take a random pair (k_1, k_2) and try it on a new plaintext–ciphertext pair (m_1, c_1), then E_{k_2}(E_{k_1}(m_1)) is a 64-bit block that has probability 2^-64 of matching the 64-bit block c_1. Therefore, among the approximately 2^48 random pairs, we expect 2^48 · 2^-64 = 2^-16 matches between E_{k_2}(E_{k_1}(m_1)) and c_1. In other words, it is likely that the second plaintext–ciphertext pair eliminates all extraneous solutions and leaves only the correct key pair (k_1, k_2). If not, a third round should complete the task.
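The expected-match estimates in this paragraph can be checked with a few lines of arithmetic:

```python
# Checking the match-count estimates for DES parameters
# (2^56 keys, 64-bit blocks) directly.

keys = 2 ** 56          # entries in each list
blocks = 2.0 ** 64      # possible 64-bit block values

per_entry = keys / blocks        # expected matches per List-1 entry: 2^-8
total = keys * per_entry         # expected matching pairs overall: 2^48
leftover = total / blocks        # false pairs surviving a second
                                 # plaintext-ciphertext pair: 2^-16

print(per_entry == 2.0 ** -8, total == 2.0 ** 48, leftover == 2.0 ** -16)
# prints True True True
```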
The ciphertext YIFZMA was encrypted by a Hill cipher with matrix . Find the plaintext.
The matrix mod 26 is not suitable for the matrix in a Hill cipher. Why?
The ciphertext GEZXDS was encrypted by a Hill cipher with a 2 × 2 matrix. The plaintext is solved. Find the encryption matrix.
Consider the following combination of Hill and Vigenère ciphers: The key consists of three matrices M_1, M_2, M_3. The plaintext letters are represented as integers mod 26. The first two are encrypted with M_1, the next two with M_2, the 5th and 6th with M_3. This is repeated cyclically, as in the Vigenère cipher. Explain how to do a chosen plaintext attack on this system. Assume that you know that three matrices are being used. State explicitly what plaintexts you would use and how you would use the outputs.
Eve captures Bob’s Hill cipher machine, which uses a 2-by-2 matrix M mod 26. She tries a chosen plaintext attack. She finds that one plaintext encrypts to one ciphertext and a second plaintext encrypts to a second ciphertext. What is the matrix M?
Alice uses a Hill cipher with a matrix that is invertible mod 26. Describe a chosen plaintext attack that will yield the entries of the matrix . Explicitly say what plaintexts you will use.
The ciphertext ELNI was encrypted by a Hill cipher with a 2 × 2 matrix. The plaintext is dont. Find the encryption matrix.
Suppose the ciphertext is ELNK and the plaintext is still dont. Find the encryption matrix. Note that only the second column of the matrix changes. This shows that the entire second column of the encryption matrix is involved in obtaining the last character of the ciphertext.
Suppose the matrix is used as the encryption matrix in a Hill cipher. Find two plaintexts that encrypt to the same ciphertext.
Let a, b, c, d, e, f be integers mod 26. Consider the following combination of the Hill and affine ciphers: Represent a block of plaintext as a pair (x, y) mod 26. The corresponding ciphertext (u, v) is
(u, v) ≡ (ax + by + e, cx + dy + f) (mod 26).
Describe how to carry out a chosen plaintext attack on this system (with the goal of finding the key a, b, c, d, e, f). You should state explicitly what plaintexts you choose and how to recover the key.
Alice is sending a message to Bob using a Hill cipher with a matrix. In fact, Alice is bored and her plaintext consists of the letter a repeated a few hundred times. Eve knows what system is being used, but not the key, and intercepts the ciphertext. State how Eve will recognize that the plaintext is one repeated letter and decide whether or not Eve can deduce the letter and/or the key. (Note: The solution very much depends on the fact that the repeated letter is a, rather than b.)
Let E_k denote encryption (for some cryptosystem) with key k. Suppose that there are N possible keys. Alice decides to encrypt a message as follows:
She chooses two keys k_1 and k_2 and double encrypts by computing
c = E_{k_2}(E_{k_1}(m))
to get the ciphertext c. Suppose Eve knows Alice’s method of encryption (but not k_1 and k_2) and has at least two plaintext–ciphertext pairs. Describe a method that is guaranteed to yield the correct k_1 and k_2 (and maybe a very small additional set of incorrect pairs). Be explicit enough to say why you are using at least two plaintext–ciphertext pairs. Eve may do up to 2N computations.
Alice and Bob are arguing about which method of multiple encryption they should use. Alice wants to choose keys k_1 and k_2 and triple encrypt a message m as c = E_{k_1}(E_{k_2}(E_{k_1}(m))). Bob wants to encrypt m as c = E_{k_1}(E_{k_1}(E_{k_2}(m))). Which method is more secure? Describe in detail an attack on the weaker encryption method.
Alice and Bob are trying to implement triple encryption. Let E_K denote DES encryption with key K and let D_K denote DES decryption with key K.
Alice chooses two keys, and , and encrypts using the formula . Bob chooses two keys, and , and encrypts using the formula . One of these methods is more secure than the other. Say which one is weaker and explicitly give the steps that can be used to attack the weaker system. You may assume that you know ten plaintext–ciphertext pairs.
What is the advantage of using instead of in Alice’s version?
Suppose E and E′ are two encryption methods. Let k_1 and k_2 be keys and consider the double encryption E_{k_1}(E′_{k_2}(m)).
Suppose you know a plaintext–ciphertext pair. Show how to perform a meet-in-the-middle attack on this double encryption.
An affine encryption given by x ↦ αx + β (mod 26) can be regarded as a double encryption, where one encryption is multiplying the plaintext by α and the other is a shift by β. Assume that you have a plaintext and ciphertext that are long enough that α and β are unique. Show that the meet-in-the-middle attack from part (a) takes at most 38 steps (not including the comparisons between the lists). Note that this is much faster than a brute force search through all 312 keys.
Let denote DES encryption with key . Suppose there is a public database consisting of DES keys and there is another public database of binary strings of length 64. Alice has five messages . She chooses a key from and a string from . She encrypts each message by computing . She uses the same and for each of the messages. She shows the five plaintext–ciphertext pairs to Eve and challenges Eve to find and . Alice knows that Eve’s computer can do only calculations, and there are pairs , so Alice thinks that Eve cannot find the correct pair. However, Eve has taken a crypto course. Show how she can find the and that Alice used. You must state explicitly what Eve does. Statements such as “Eve makes a list” are not sufficient; you must include what is on the lists and how long they are.
Alice wants to encrypt her messages securely, but she can afford only an encryption machine that uses a 25-bit key. To increase security, she chooses 4 keys k_1, k_2, k_3, k_4 and encrypts four times:
c = E_{k_4}(E_{k_3}(E_{k_2}(E_{k_1}(m)))).
Eve finds several plaintext–ciphertext pairs encrypted with this set of keys. Describe how she can find (with high probability) the keys k_1, k_2, k_3, k_4. (For this problem, assume that Eve cannot do the 2^100 computations needed to try all combinations of keys.) (Note: If you use only one of the plaintext–ciphertext pairs in your solution, you probably have not done enough to determine the keys.)
Show that the decryption procedures given for the CBC and CFB modes actually perform the desired decryptions.
Consider the following simplified version of the CFB mode. The plaintext is broken into 32-bit pieces: P = [P_1, P_2, ...], where each P_j has 32 bits, rather than the eight bits used in CFB. Encryption proceeds as follows. An initial 64-bit X_1 is chosen. Then for j = 1, 2, 3, ..., the following is performed:
C_j = P_j ⊕ L_32(E_K(X_j)),
X_{j+1} = R_32(X_j) ∥ C_j,
where L_32(X) denotes the 32 leftmost bits of X, R_32(X) denotes the rightmost 32 bits of X, and X ∥ Y denotes the string obtained by writing X followed by Y.
Find the decryption algorithm.
The ciphertext consists of 32-bit blocks C_1, C_2, C_3, .... Suppose that a transmission error causes C_1 to be received as a different block, but that C_2, C_3, ... are received correctly. This corrupted ciphertext is then decrypted to yield plaintext blocks P̃_1, P̃_2, .... Show that P̃_1, P̃_2, P̃_3 will in general be incorrect, but that P̃_j = P_j for all j ≥ 4. Therefore, the error affects only three blocks of the decryption.
The cipher block chaining (CBC) mode has the property that it recovers from errors in ciphertext blocks. Show that if an error occurs in the transmission of a block C_j, but all the other blocks are transmitted correctly, then this affects only two blocks of the decryption. Which two blocks?
In CTR mode, the initial X_1 has 64 bits and is sent unencrypted to the receiver. (a) If X_1 is chosen randomly every time a message is encrypted, approximately how many messages must be sent in order for there to be a good chance that two messages use the same X_1? (b) What could go wrong if the same X_1 is used for two different messages? (Assume that the key is not changed.)
Suppose that in CBC mode, the final plaintext block P_N is incomplete; that is, its length ℓ is less than the usual block size of, say, 64 bits. Often, this last block is padded with a binary string to make it have full length. Another method that can be used is called ciphertext stealing, as follows:
Compute X = E_K(P_{N-1} ⊕ C_{N-2}).
Compute C_N = L_ℓ(X), where L_ℓ means we take the leftmost ℓ bits.
Compute C_{N-1} = E_K(P_N0 ⊕ X), where P_N0 denotes P_N with enough 0s appended to give it the length of a full 64-bit block.
The ciphertext is C_1 C_2 ⋯ C_{N-1} C_N. Since C_N has only ℓ bits, the ciphertext has the same length as the plaintext.
Suppose you receive a message that used this ciphertext stealing for the final blocks (the ciphertext blocks were computed in the usual way for CBC). Show how to decrypt the ciphertext (you have the same key as the sender).
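The three encryption steps above can be sketched in code. The 64-bit block cipher E below is a hypothetical stand-in (any invertible map on 64-bit values would do); only the mode of operation is the point. Working out the decryption is the exercise, so only encryption is shown.

```python
# CBC encryption with ciphertext stealing for a short final block,
# following the three steps in the text. E is a toy stand-in cipher.

B = 64
MASK = (1 << B) - 1

def E(key, x):
    """Toy 64-bit block 'cipher': XOR the key, then an odd multiplier
    (odd multiplier means the map is invertible mod 2^64)."""
    return ((x ^ key) * 0x9E3779B97F4A7C15) & MASK

def cbc_steal_encrypt(key, iv, full_blocks, last_block, last_len):
    """full_blocks = P_1..P_{N-1} as 64-bit ints; last_block = P_N with
    last_len < 64 bits. Returns ([C_1..C_{N-1}], C_N)."""
    cipher, prev = [], iv
    for p in full_blocks[:-1]:              # ordinary CBC for P_1..P_{N-2}
        prev = E(key, p ^ prev)
        cipher.append(prev)
    x = E(key, full_blocks[-1] ^ prev)      # step 1: X = E_K(P_{N-1} xor C_{N-2})
    c_last = x >> (B - last_len)            # step 2: C_N = leftmost bits of X
    padded = last_block << (B - last_len)   # P_N0: P_N followed by 0s
    cipher.append(E(key, padded ^ x))       # step 3: C_{N-1} = E_K(P_N0 xor X)
    return cipher, c_last

blocks = [0x0123456789ABCDEF, 0xFEDCBA9876543210]   # P_1, P_2
last, last_len = 0b1011, 4                          # P_3: a 4-bit final block
cs, c_last = cbc_steal_encrypt(0x1234, 0, blocks, last, last_len)
# Ciphertext length equals plaintext length:
print(len(cs) * B + last_len == len(blocks) * B + last_len)   # prints True
```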
Suppose Alice has a block cipher with keys, Bob has one with keys, and Carla has one with keys. The only known way to break single encryption with each system is by brute force, namely trying all keys. Alice uses her system with single encryption. But Bob uses his with double encryption, and Carla uses hers with triple encryption. Who has the most secure system? Who has the weakest? (Assume that double and triple encryption do not reduce to using single or double encryption, respectively. Also, assume that some plaintext-ciphertext pairs are available for Alice’s single encryption, Bob’s double encryption, and Carla’s triple encryption.)
The following is the ciphertext of a Hill cipher
zirkzwopjjoptfapuhfhadrq
using the matrix
Decrypt.
In 1973, the National Bureau of Standards (NBS), later to become the National Institute of Standards and Technology (NIST), issued a public request seeking a cryptographic algorithm to become a national standard. IBM submitted an algorithm called LUCIFER in 1974. The NBS forwarded it to the National Security Agency, which reviewed it and, after some modifications, returned a version that was essentially the Data Encryption Standard (DES) algorithm. In 1975, NBS released DES, as well as a free license for its use, and in 1977 NBS made it the official data encryption standard.
DES was used extensively in electronic commerce, for example in the banking industry. If two banks wanted to exchange data, they first used a public key method such as RSA to transmit a key for DES, then they used DES for transmitting the data. It had the advantage of being very fast and reasonably secure.
From 1975 on, there was controversy surrounding DES. Some regarded the key size as too small. Many were worried about NSA’s involvement. For example, had they arranged for it to have a “trapdoor,” in other words, a secret weakness that would allow only them to break the system? It was also suggested that NSA modified the design to avoid the possibility that IBM had inserted a trapdoor in LUCIFER. In any case, the design decisions remained a mystery for many years.
In 1990, Eli Biham and Adi Shamir showed how their method of differential cryptanalysis could be used to attack DES, and soon thereafter they showed how these methods could succeed faster than brute force. This indicated that perhaps the designers of DES had been aware of this type of attack. A few years later, IBM released some details of the design criteria, which showed that indeed they had constructed the system to be resistant to differential cryptanalysis. This cleared up at least some of the mystery surrounding the algorithm.
DES lasted for a long time, but became outdated. Brute force searches (see Section 7.5), though expensive, can now break the system. Therefore, NIST replaced it with the system AES (see Chapter 8) in the year 2000. However, it is worth studying DES since it represents a popular class of algorithms and it was one of the most frequently used cryptographic algorithms in history.
DES is a block cipher; namely, it breaks the plaintext into blocks of 64 bits, and encrypts each block separately. The actual mechanics of how this is done is often called a Feistel system, after Horst Feistel, who was part of the IBM team that developed LUCIFER. In the next section, we give a simple algorithm that has many of the characteristics of this type of system, but is small enough to use as an example. In Section 7.3, we show how differential cryptanalysis can be used to attack this simple system. We give the DES algorithm in Section 7.4. Finally, in Section 7.5, we describe some methods used to break DES.
The DES algorithm is rather unwieldy to use for examples, so in the present section we present an algorithm that has many of the same features, but is much smaller. Like DES, the present algorithm is a block cipher. Since the blocks are encrypted separately, we assume throughout the present discussion that the full message consists of only one block.
The message has 12 bits and is written in the form L_0R_0, where L_0 consists of the first six bits and R_0 consists of the last six bits. The key K has nine bits. The ith round of the algorithm transforms an input L_{i-1}R_{i-1} to the output L_iR_i using an eight-bit key K_i derived from K.
The main part of the encryption process is a function f(R_{i-1}, K_i) that takes a six-bit input R_{i-1} and an eight-bit input K_i and produces a six-bit output. This will be described later.
The output for the ith round is defined as follows:
L_i = R_{i-1} and R_i = L_{i-1} ⊕ f(R_{i-1}, K_i),
where ⊕ denotes XOR, namely bit-by-bit addition mod 2. This is depicted in Figure 7.1.
This operation is performed for a certain number of rounds, say n, and produces the ciphertext L_nR_n.
How do we decrypt? Start with the ciphertext L_nR_n and switch left and right to obtain R_nL_n. (Note: This switch is built into the DES encryption algorithm, so it is not needed when decrypting DES.) Now use the same procedure as before, but with the keys used in reverse order K_n, K_{n-1}, ..., K_1. Let’s see how this works. The first step takes R_nL_n and gives the output
[L_n, R_n ⊕ f(L_n, K_n)].
We know from the encryption procedure that L_n = R_{n-1} and R_n = L_{n-1} ⊕ f(R_{n-1}, K_n). Therefore,
[L_n, R_n ⊕ f(L_n, K_n)] = [R_{n-1}, L_{n-1} ⊕ f(R_{n-1}, K_n) ⊕ f(L_n, K_n)] = [R_{n-1}, L_{n-1}].
The last equality again uses L_n = R_{n-1}, so that f(R_{n-1}, K_n) ⊕ f(L_n, K_n) is 0. Similarly, the second step of decryption sends R_{n-1}L_{n-1} to R_{n-2}L_{n-2}. Continuing, we see that the decryption process leads us back to R_0L_0. Switching the left and right halves, we obtain the original plaintext L_0R_0, as desired.
Note that the decryption process is essentially the same as the encryption process. We simply need to switch left and right and use the keys in reverse order. Therefore, both the sender and receiver use a common key and they can use identical machines (though the receiver needs to reverse left and right inputs).
So far, we have said nothing about the function f. In fact, any f would work in the above procedures. But some choices of f yield much better security than others. The type of f used in DES is similar to that which we describe next. It is built up from a few components.
The first function is an expander. It takes an input of six bits and outputs eight bits. The one we use is given in Figure 7.2.
This means that the first input bit yields the first output bit, the third input bit yields both the fourth and the sixth output bits, etc. For example, 011001 is expanded to 01010101.
The main components are called S-boxes. We use two:
The input for an S-box has four bits. The first bit specifies which row will be used: 0 for the first row, 1 for the second. The other three bits represent a binary number that specifies the column: 000 for the first column, 001 for the second, ..., 111 for the last column. The output for the S-box consists of the three bits in the specified location. For example, an input of 1010 for S_1 means we look at the second row, third column, which yields the output 110.
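The row/column convention can be expressed in a few lines of code. The two tables below are the S-boxes of this simplified cipher, reproduced here since the displays above may not carry over; verify them against your copy of the text. The lookup reproduces the 1010 → 110 example.

```python
# S-box lookup: for a 4-bit input, the first bit selects the row and the
# last three bits select the column. Tables reproduced from the simplified
# cipher in the text (treat them as assumptions if your edition differs).

S1 = [[0b101, 0b010, 0b001, 0b110, 0b011, 0b100, 0b111, 0b000],
      [0b001, 0b100, 0b110, 0b010, 0b000, 0b111, 0b101, 0b011]]
S2 = [[0b100, 0b000, 0b110, 0b101, 0b111, 0b001, 0b011, 0b010],
      [0b101, 0b011, 0b000, 0b111, 0b110, 0b010, 0b001, 0b100]]

def sbox(table, bits4):
    row = bits4 >> 3        # leading bit: 0 = first row, 1 = second row
    col = bits4 & 0b111     # remaining three bits, read as a column number
    return table[row][col]

# The example from the text: input 1010 to S1 -> second row, third column.
print(format(sbox(S1, 0b1010), "03b"))   # prints 110
```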
The key K consists of nine bits. The key K_i for the ith round of encryption is obtained by using eight bits of K, starting with the ith bit. For example, if K = 010011001, then K_4 = 01100101 (the bits of K are used cyclically: after reaching the end of K, the remaining bits were obtained from the beginning of K).
We can now describe f(R_{i-1}, K_i). The input R_{i-1} consists of six bits. The expander function is used to expand it to eight bits. The result E(R_{i-1}) is XORed with K_i to produce another eight-bit number. The first four bits are sent to S_1, and the last four bits are sent to S_2. Each S-box outputs three bits, which are concatenated to form a six-bit number. This is f(R_{i-1}, K_i). We present this in Figure 7.3.
For example, suppose R_{i-1} = 100110 and K_i = 01100101. We have
E(R_{i-1}) ⊕ K_i = 10101010 ⊕ 01100101 = 11001111.
The first four bits, 1100, are sent to S_1 and the last four bits, 1111, are sent to S_2. The second row, fifth column of S_1 contains 000. The second row, last column of S_2 contains 100. Putting these outputs one after the other yields f(R_{i-1}, K_i) = 000100.
We can now describe what happens in one round. Suppose the input is
L_{i-1}R_{i-1} = 011100100110
and K_i = 01100101, as previously. This means that R_{i-1} = 100110, as in the example just discussed. Therefore, f(R_{i-1}, K_i) = 000100. This is XORed with L_{i-1} = 011100 to yield R_i = 011000. Since L_i = R_{i-1}, we obtain
L_iR_i = 100110011000.
The output becomes the input for the next round.
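The whole cipher fits comfortably in a short program. The S-box tables and the sample key are reconstructions matching the worked example above (e.g., f(100110, 01100101) = 000100); verify them against your copy of the text. The last lines check that decryption, done by swapping halves and reversing the key order as described earlier, recovers the plaintext.

```python
# A sketch of the simplified cipher: expander, f, the Feistel rounds, and
# n-round encryption/decryption.

S1 = [[0b101, 0b010, 0b001, 0b110, 0b011, 0b100, 0b111, 0b000],
      [0b001, 0b100, 0b110, 0b010, 0b000, 0b111, 0b101, 0b011]]
S2 = [[0b100, 0b000, 0b110, 0b101, 0b111, 0b001, 0b011, 0b010],
      [0b101, 0b011, 0b000, 0b111, 0b110, 0b010, 0b001, 0b100]]

def bits(s):
    return tuple(int(c) for c in s)

def expand(r):
    """6 bits -> 8 bits; input bits 3 and 4 each appear twice."""
    return (r[0], r[1], r[3], r[2], r[3], r[2], r[4], r[5])

def sbox(table, x):
    """4 bits -> 3 bits: first bit = row, last three bits = column."""
    out = table[x[0]][x[1] * 4 + x[2] * 2 + x[3]]
    return tuple(int(c) for c in format(out, "03b"))

def f(r, k):
    """The round function: expand, XOR the round key, apply S1 and S2."""
    x = tuple(a ^ b for a, b in zip(expand(r), k))
    return sbox(S1, x[:4]) + sbox(S2, x[4:])

def round_keys(key, n):
    """K_i is the 8 bits of the 9-bit key starting at bit i (wrapping)."""
    return [tuple(key[(i + j) % 9] for j in range(8)) for i in range(n)]

def feistel(l, r, keys):
    """L_i = R_{i-1}, R_i = L_{i-1} XOR f(R_{i-1}, K_i), for each key."""
    for k in keys:
        l, r = r, tuple(a ^ b for a, b in zip(l, f(r, k)))
    return l, r

key = bits("010011001")
ks = round_keys(key, 4)
L0, R0 = bits("011100"), bits("100110")
L4, R4 = feistel(L0, R0, ks)
# Decrypt: swap halves, run the rounds with the keys reversed, swap back.
l, r = feistel(R4, L4, ks[::-1])
print((r, l) == (L0, R0))   # prints True
```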
For work on this and another simplified DES algorithm and how they behave under multiple encryption, see [Konikoff-Toplosky].
This section is rather technical and can be skipped on a first reading.
Differential cryptanalysis was introduced by Biham and Shamir around 1990, though it was probably known much earlier to the designers of DES at IBM and NSA. The idea is to compare the differences in the ciphertexts for suitably chosen pairs of plaintexts and thereby deduce information about the key. Note that the difference of two strings of bits can be found by XORing them. Because the key bits K_i are introduced by XORing them with E(R_{i-1}), looking at the XOR of two inputs removes the effect of the key at this stage and hence removes some of the randomness introduced by the key. We’ll see that this allows us to deduce information as to what the key could be.
We eventually want to describe how to attack the above system when it uses four rounds, but we need to start by analyzing three rounds. Therefore, we temporarily start with L_1R_1 instead of L_0R_0.
The situation is now as follows. We have obtained access to a three-round encryption device that uses the preceding procedure. We know all the inner workings of the encryption algorithm such as the S-boxes, but we do not know the key. We want to find the key by a chosen plaintext attack. We use various inputs L_1R_1 and obtain outputs L_4R_4.
We have
R_4 = L_3 ⊕ f(R_3, K_4) = R_2 ⊕ f(R_3, K_4) = L_1 ⊕ f(R_1, K_2) ⊕ f(R_3, K_4).
Suppose we have another message L_1*R_1* with R_1 = R_1*. For each i, let L_i' = L_i ⊕ L_i* and R_i' = R_i ⊕ R_i*. Then L_i'R_i' is the “difference” (or sum; we are working mod 2) of L_iR_i and L_i*R_i*. The preceding calculation applied to L_4*R_4* yields a formula for R_4*. Since we have assumed that R_1 = R_1*, we have f(R_1, K_2) = f(R_1*, K_2). Therefore, f(R_1, K_2) ⊕ f(R_1*, K_2) = 0 and
R_4' = R_4 ⊕ R_4* = L_1' ⊕ f(R_3, K_4) ⊕ f(R_3*, K_4).
This may be rearranged to
R_4' ⊕ L_1' = f(R_3, K_4) ⊕ f(R_3*, K_4).
Finally, since R_3 = L_4 and R_3* = L_4*, we obtain
R_4' ⊕ L_1' = f(L_4, K_4) ⊕ f(L_4*, K_4).
Note that if we know the input XOR, namely L_1' (recall that R_1' = 0), and if we know the outputs L_4R_4 and L_4*R_4*, then we know everything in this last equation except K_4.
Now let’s analyze the inputs to the S-boxes used to calculate f(L_4, K_4) and f(L_4*, K_4). If we start with L_4, we first expand and then XOR with K_4 to obtain E(L_4) ⊕ K_4, which are the bits sent to S_1 and S_2. Similarly, L_4* yields E(L_4*) ⊕ K_4. The XOR of these is
E(L_4) ⊕ E(L_4*) = E(L_4 ⊕ L_4*) = E(L_4')
(the XORs with K_4 cancel, and the first equality follows easily from the bit-by-bit description of the expansion function). Therefore, we know
the XORs of the inputs to the two S-boxes (namely, the first four and the last four bits of E(L_4'));
the XORs of the two outputs (namely, the first three and the last three bits of R_4' ⊕ L_1').
Let’s restrict our attention to S_1. The analysis for S_2 will be similar. It is fairly fast to run through all pairs of four-bit inputs with a given XOR (there are only 16 of them) and see which ones give a desired output XOR. These can be computed once and for all and stored in a table.
For example, suppose we have input XOR equal to 1011 and we are looking for output XOR equal to 100. We can run through the input pairs (1011, 0000), (1010, 0001), (1001, 0010), ..., each of which has XOR equal to 1011, and look at the output XORs. We find that the pairs (1010, 0001) and (0001, 1010) both produce output XOR 100. For example, 1010 means we look at the second row, third column of S_1, which is 110. Moreover, 0001 means we look at the first row, second column, which is 010. The output XOR is therefore 110 ⊕ 010 = 100.
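This table-building step is easy to automate. Using the S_1 table of the simplified cipher (reproduced below; verify it against the text), the following search recovers exactly the two pairs just mentioned:

```python
# Building the "pairs with a given input XOR and output XOR" list for S1.

S1 = [[0b101, 0b010, 0b001, 0b110, 0b011, 0b100, 0b111, 0b000],
      [0b001, 0b100, 0b110, 0b010, 0b000, 0b111, 0b101, 0b011]]

def s1(x):
    """4-bit lookup: first bit = row, last three bits = column."""
    return S1[x >> 3][x & 0b111]

def pairs_with_xor(in_xor, out_xor):
    """All ordered 4-bit input pairs (a, b) with a XOR b = in_xor whose
    S1 outputs XOR to out_xor."""
    return [(a, a ^ in_xor) for a in range(16)
            if s1(a) ^ s1(a ^ in_xor) == out_xor]

found = pairs_with_xor(0b1011, 0b100)
print([(format(a, "04b"), format(b, "04b")) for a, b in found])
# prints [('0001', '1010'), ('1010', '0001')]
```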
We know L_4 and L_4*, and hence E(L_4) and E(L_4*). The inputs to S_1 are therefore the left four bits of E(L_4) ⊕ K_4 and of E(L_4*) ⊕ K_4. If we know that the output XOR for S_1 is 100, then this pair of inputs must be one of the pairs on the list we just calculated, namely (1010, 0001) or (0001, 1010). Since the left four bits of E(L_4) are known, each possibility determines a candidate for the left four bits of K_4, so there are two possibilities for these four key bits.
If we repeat this procedure a few more times, we should be able to eliminate one of the two choices and hence determine four bits of K_4. Similarly, using S_2, we find four more bits of K_4. We therefore know eight of the nine bits of K. The last bit can be found by trying both possibilities and seeing which one produces the same encryptions as the machine we are attacking.
Here is a summary of the procedure (for notational convenience, we describe it with both S-boxes used simultaneously, though in the examples we work with the S-boxes separately):
Look at the list of pairs with input XOR equal to E(L_4') and output XOR equal to R_4' ⊕ L_1'.
The pair (E(L_4) ⊕ K_4, E(L_4*) ⊕ K_4) is on this list.
Deduce the possibilities for K_4.
Repeat until only one possibility for K_4 remains.
We start with a plaintext L_1R_1
and the machine encrypts it in three rounds using the key K, though we do not yet know K. We obtain the output L_4R_4 (note that since we are starting with L_1R_1, we start with the shifted key K_2).
If we start with a second plaintext L_1*R_1*
(note that R_1* = R_1), then we obtain the output L_4*R_4*.
We have R_4' ⊕ L_1' = 010100 and E(L_4') = 10101011. The inputs to S_1 have XOR equal to 1010 and the inputs to S_2 have XOR equal to 1011. The S-boxes have output XOR R_4' ⊕ L_1', so the output XOR from S_1 is 010 and that from S_2 is 100.
For the pairs (0011, 1001) and (1001, 0011), S_1 produces output XOR equal to 010. Since the first member of one of these pairs should be the left four bits of E(L_4) ⊕ K_4, and E(L_4) is known, we obtain two candidates for the first four bits of K_4. For the pairs (0111, 1100) and (1100, 0111), S_2 produces output XOR equal to 100. Since the first member of one of these pairs should be the right four bits of E(L_4) ⊕ K_4, we likewise obtain two candidates for the last four bits of K_4.
Now repeat (with the same machine and the same key K), but with a new pair of inputs L_1R_1 and L_1*R_1* (again with R_1 = R_1*).
A similar analysis gives two candidates for the first four bits of K_4 and two candidates for the last four bits. Combining this with the previous information, we see that the first four bits of K_4 are 0011 and the last four bits are 0100. Therefore, K_4 = 00110100, so K = 00*001101, where * denotes the still-unknown third bit of K (recall that K_4 starts with the fourth bit of K).
It remains to find the third bit of K. If we use K = 000001101, it encrypts L_1R_1 to 001011101010, which is not L_4R_4, while K = 001001101 yields the correct encryption. Therefore, the key is K = 001001101.
Suppose now that we have obtained access to a four-round device. Again, we know all the inner workings of the algorithm except the key, and we want to determine the key. The analysis we used for three rounds still applies, but to extend it to four rounds we need to use more probabilistic techniques.
There is a weakness in the box S_1. If we look at the 16 input pairs with XOR equal to 0011, we discover that 12 of them have output XOR equal to 011. Of course, we expect on the average that two pairs should yield a given output XOR, so the present case is rather extreme. A little variation is to be expected; we’ll see that this large variation makes it easy to find the key.
There is a similar weakness in S_2, though not quite as extreme. Among the 16 input pairs with XOR equal to 1100, there are eight with output XOR equal to 010.
Suppose now that we start with randomly chosen L_1R_1 and L_1*R_1* such that R_1' = R_1 ⊕ R_1* = 001100. This is expanded to E(R_1') = 00111100. Therefore the input XOR for S_1 is 0011 and the input XOR for S_2 is 1100. With probability 12/16 the output XOR for S_1 will be 011, and with probability 8/16 the output XOR for S_2 will be 010. If we assume the outputs of the two S-boxes are independent, we see that the combined output XOR will be 011010 with probability (12/16)(8/16) = 3/8. Because the expansion function sends bits 3 and 4 to both S_1 and S_2, the two boxes cannot be assumed to have independent outputs, but 3/8 should still be a reasonable estimate for what happens.
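The counts 12/16 and 8/16 can be verified by brute force over the 16 ordered input pairs with each stated input XOR (S-box tables reproduced from the simplified cipher; verify them against the text):

```python
# Counting how many of the 16 ordered input pairs with a fixed input XOR
# produce the frequent output XOR, for each S-box.

S1 = [[0b101, 0b010, 0b001, 0b110, 0b011, 0b100, 0b111, 0b000],
      [0b001, 0b100, 0b110, 0b010, 0b000, 0b111, 0b101, 0b011]]
S2 = [[0b100, 0b000, 0b110, 0b101, 0b111, 0b001, 0b011, 0b010],
      [0b101, 0b011, 0b000, 0b111, 0b110, 0b010, 0b001, 0b100]]

def look(table, x):
    """4-bit lookup: first bit = row, last three bits = column."""
    return table[x >> 3][x & 0b111]

def count_pairs(table, in_xor, out_xor):
    """Ordered pairs (a, a ^ in_xor) whose outputs XOR to out_xor."""
    return sum(look(table, a) ^ look(table, a ^ in_xor) == out_xor
               for a in range(16))

print(count_pairs(S1, 0b0011, 0b011), count_pairs(S2, 0b1100, 0b010))
# prints 12 8
```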
Now suppose we choose L_1 and L_1* so that L_1' = L_1 ⊕ L_1* = 011010. Recall that in the encryption algorithm the output of the S-boxes is XORed with L_1 to obtain R_2. Suppose the output XOR of the S-boxes is 011010. Then R_2' = 011010 ⊕ L_1' = 000000. Since L_2 = R_1 and L_2* = R_1*, it follows that
L_2' = R_1' = 001100 and R_2' = 000000.
Putting everything together, we see that if we start with two randomly chosen messages with XOR equal to L_1'R_1' = 011010001100, then there is a probability of around 3/8 that R_2' = 000000.
Here’s the strategy for finding the key. Try several randomly chosen pairs of inputs with XOR equal to 011010001100. Look at the outputs L_4R_4 and L_4*R_4*. Assume that R_2' = 000000. Then use three-round differential cryptanalysis, with L_2' = 001100 and the known outputs, to deduce a set of possible keys K_4. When R_2' = 000000, which should happen around 3/8 of the time, this list of keys will contain K_4, along with some other random keys. The remaining 5/8 of the time, the list should contain random keys. Since there seems to be no reason that any incorrect key should appear frequently, the correct K_4 will probably appear in the lists of keys more often than the other keys.
Here is an example. Suppose we are attacking a four-round device. We try one hundred random pairs of inputs with XOR equal to 011010001100. The frequencies of possible keys K_4 we obtain are in the following table. We find it easier to look at the first four bits and the last four bits of K_4 separately.
It is therefore likely that the first four bits of K_4 are 1100 and the last four bits are 0010, so K_4 = 11000010. Therefore, the key is 10*110000, where * denotes the still-undetermined third bit.
To determine the remaining bit, we proceed as before. We can compute that 000000000000 is encrypted to 100011001011 using the key 101110000 and is encrypted to 001011011010 using the key 100110000. If the machine we are attacking encrypts 000000000000 to 100011001011, we conclude that the second key cannot be correct, so the correct key is probably 101110000.
The preceding attack can be extended to more rounds by extensions of these methods. It might be noticed that we could have obtained the key at least as quickly by simply running through all possibilities for the key. That is certainly true in this simple model. However, in more elaborate systems such as DES, differential cryptanalytic techniques are much more efficient than exhaustive searching through all keys, at least until the number of rounds becomes fairly large. In particular, the reason that DES uses 16 rounds appears to be that differential cryptanalysis is more efficient than exhaustive search until 16 rounds are used.
There is another attack on DES, called linear cryptanalysis, that was developed by Mitsuru Matsui [Matsui]. The main ingredient is an approximation of DES by a linear function of the input bits. It is theoretically faster than an exhaustive search for the key and requires around 2^43 plaintext–ciphertext pairs to find the key. It seems that the designers of DES had not anticipated linear cryptanalysis. For details of the method, see [Matsui].
A block of plaintext consists of 64 bits. The key has 56 bits, but is expressed as a 64-bit string. The 8th, 16th, 24th, ..., 64th bits are parity bits, arranged so that each block of eight bits has an odd number of 1s. This is for error detection purposes. The output of the encryption is a 64-bit ciphertext.
The DES algorithm, depicted in Figure 7.4, starts with a plaintext of 64 bits, and consists of three stages:
The bits of m are permuted by a fixed initial permutation to obtain m_0 = IP(m). Write m_0 = L_0R_0, where L_0 is the first 32 bits of m_0 and R_0 is the last 32 bits.
For 1 ≤ i ≤ 16, perform the following:
L_i = R_{i-1} and R_i = L_{i-1} ⊕ f(R_{i-1}, K_i),
where K_i is a string of 48 bits obtained from the key K and f is a function to be described later.
Switch left and right to obtain R_16L_16, then apply the inverse of the initial permutation to get the ciphertext c = IP^{-1}(R_16L_16).
Decryption is performed by exactly the same procedure, except that the keys are used in reverse order. The reason this works is the same as for the simplified system described in Section 7.2. Note that the left–right switch in step 3 of the DES algorithm means that we do not have to do the left–right switch that was needed for decryption in Section 7.2.
We now describe the steps in more detail.
The initial permutation, which seems to have no cryptographic significance but which was perhaps designed to make the algorithm load more efficiently into chips that were available in the 1970s, can be described by the Initial Permutation table. This means that the 58th bit of m becomes the first bit of m_0, the 50th bit of m becomes the second bit of m_0, etc.
The function f(R, K_i), depicted in Figure 7.5, is described in several steps.
First, R is expanded to E(R) by the following table.
This means that the first bit of E(R) is the 32nd bit of R, etc. Note that E(R) has 48 bits.
Compute E(R) ⊕ K_i, which has 48 bits, and write it as B_1B_2⋯B_8, where each B_j has six bits.
There are eight S-boxes S_1, ..., S_8, given on page 150. B_j is the input for S_j. Write B_j = b_1b_2b_3b_4b_5b_6. The row of the S-box is specified by b_1b_6, while b_2b_3b_4b_5 determines the column. For example, if B_3 = 001001, we look at the row 01, which is the second row (00 gives the first row), and the column 0100, which is the 5th column (0100 represents 4 in binary; the first column is numbered 0, so the fifth is labeled 4). The entry in S_3 in this location is 3, which is 0011 in binary. Therefore, the output of S_3 is 0011 in this case. In this way, we obtain eight four-bit outputs C_1, C_2, ..., C_8.
The string C_1C_2⋯C_8 is permuted according to the following table.
The resulting 32-bit string is f(R, K_i).
Finally, we describe how to obtain the round keys K_i. Recall that we start with a 64-bit key K.
The parity bits are discarded and the remaining 56 bits are permuted by the following table.
Write the result as C_0D_0, where C_0 and D_0 have 28 bits.
For 1 ≤ i ≤ 16, let C_i = LS_i(C_{i-1}) and D_i = LS_i(D_{i-1}). Here LS_i means shift the input one or two places to the left, according to the following table.
48 bits are chosen from the 56-bit string C_iD_i according to the following table. The output is K_i.
It turns out that each bit of the key is used in approximately 14 of the 16 rounds.
A few remarks are in order. In a good cipher system, each bit of the ciphertext should depend on all bits of the plaintext. The expansion is designed so that this will happen in only a few rounds. The purpose of the initial permutation is not completely clear. It has no cryptographic purpose. The S-boxes are the heart of the algorithm and provide the security. Their design was somewhat of a mystery until IBM published the following criteria in the early 1990s (for details, see [Coppersmith1]).
Each S-box has six input bits and four output bits. This was the largest that could be put on one chip in 1974.
The outputs of the S-boxes should not be close to being linear functions of the inputs (linearity would have made the system much easier to analyze).
Each row of an S-box contains all numbers from 0 to 15.
If two inputs to an S-box differ by one bit, the outputs must differ by at least two bits.
If two inputs to an S-box differ in exactly the middle two bits, then the outputs differ in at least two bits.
If two inputs to an S-box differ in their first two bits but have the same last two bits, the outputs must be unequal.
There are 32 pairs of inputs having a given XOR. For each of these pairs, compute the XOR of the outputs. No more than eight of these output XORs should be the same. This is clearly to avoid an attack via differential cryptanalysis.
A criterion similar to (7), but involving three S-boxes.
In the early 1970s, it took a computer several months of searching to find appropriate S-boxes. Now, such a search could be completed in a very short time.
One possible way of effectively increasing the key size of DES is to double encrypt. Choose keys k_1 and k_2 and encrypt a plaintext m by E_{k_2}(E_{k_1}(m)). Does this increase the security?
Meet-in-the-middle attacks on cryptosystems are discussed in Section 6.5. It is pointed out that, if an attacker has sufficient memory, double encryption provides little extra protection. Moreover, if a cryptosystem is such that double encryption is equivalent to a single encryption, then there is no additional security obtained by double encryption.
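The idea behind a meet-in-the-middle attack can be sketched on a toy cipher with one-byte blocks and one-byte keys. The cipher below is invented purely for illustration (it is certainly not DES); the point is the time–memory trade-off, not the cipher itself:

```python
# Toy meet-in-the-middle attack on double encryption.
# toy_encrypt is a stand-in cipher: XOR with a key-derived pad.

def toy_encrypt(key, block):
    # 8-bit toy "block cipher"
    return block ^ ((key * 37 + 11) % 256)

def toy_decrypt(key, block):
    return block ^ ((key * 37 + 11) % 256)

def meet_in_the_middle(plain, cipher, keyspace=range(256)):
    # Table of middle values E_{k1}(plain) for every candidate k1
    middle = {}
    for k1 in keyspace:
        middle.setdefault(toy_encrypt(k1, plain), []).append(k1)
    # For each k2, check whether D_{k2}(cipher) appears in the table
    matches = []
    for k2 in keyspace:
        m = toy_decrypt(k2, cipher)
        for k1 in middle.get(m, []):
            matches.append((k1, k2))
    return matches

k1, k2 = 123, 200
c = toy_encrypt(k2, toy_encrypt(k1, 77))
assert (k1, k2) in meet_in_the_middle(77, c)
```

Instead of trying all pairs of keys (a search of size keyspace squared), the attacker stores one table per key and does two linear passes, which is why double encryption adds much less security than one might hope.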
In addition, if double encryption is equivalent to single encryption, then the (single encryption) cryptosystem is much less secure than one might guess initially (see Exercise 3 in Chapter 12). If this were true for DES, for example, then an exhaustive search through all 2^56 keys could be replaced by a search of length around 2^28, which would be quite easy to do.
For affine ciphers (Section 2.2) and for RSA (Chapter 9), double encrypting with two keys K1 and K2 is equivalent to encrypting with a third key K3. Is the same true for DES? Namely, is there a key K3 such that E_{K3} = E_{K2} ∘ E_{K1}? This question is often rephrased in the equivalent form "Is DES a group?" (The reader who is unfamiliar with group theory can ask "Is DES closed under composition?")
Fortunately, it turns out that DES is not a group. We sketch the proof. For more details, see [Campbell-Wiener]. Let E_0 represent encryption with the key consisting entirely of 0s and let E_1 represent encryption with the key consisting entirely of 1s. These keys are weak for cryptographic purposes (see Exercise 5). Moreover, D. Coppersmith found that applying E_1 ∘ E_0 repeatedly to certain plaintexts yielded the original plaintext after around 2^32 iterations. A sequence of encryptions (for some plaintext P)

P, (E_1 E_0)(P), (E_1 E_0)^2(P), ..., (E_1 E_0)^n(P) = P,

where n is the smallest positive integer such that (E_1 E_0)^n(P) = P, is called a cycle of length n.
Lemma. If m is the smallest positive integer such that (E_1 E_0)^m(P) = P for all P, and n is the length of a cycle (so (E_1 E_0)^n(P_0) = P_0 for a particular P_0), then n divides m.
Proof. Divide n into m, with remainder r. This means that m = qn + r for some integer q, and 0 ≤ r < n. Since (E_1 E_0)^n(P_0) = P_0, encrypting qn times with E_1 E_0 leaves P_0 unchanged. Therefore,

(E_1 E_0)^r(P_0) = (E_1 E_0)^{qn+r}(P_0) = (E_1 E_0)^m(P_0) = P_0.
Since n is the smallest positive integer such that (E_1 E_0)^n(P_0) = P_0, and 0 ≤ r < n, we must have r = 0. This means that m = qn, so n divides m.
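The lemma can be checked numerically on a small permutation (the permutation below is arbitrary; E_1 E_0 is likewise a permutation, but of the 2^64 possible blocks):

```python
# Numerical check of the lemma: the cycle length of any point
# divides the order of the permutation.
from math import lcm

# a permutation of {0,...,7} written as a list: i -> perm[i]
perm = [2, 0, 1, 4, 3, 6, 7, 5]

def cycle_length(perm, p):
    # smallest n > 0 with perm^n(p) = p
    n, q = 1, perm[p]
    while q != p:
        q = perm[q]
        n += 1
    return n

lengths = [cycle_length(perm, p) for p in range(len(perm))]
order = lcm(*lengths)   # smallest m with perm^m = identity
assert all(order % n == 0 for n in lengths)
print(order)  # 6, the lcm of the cycle lengths 3, 3, 3, 2, 2, 3, 3, 3
```

The order of the permutation is exactly the least common multiple of its cycle lengths, which is the fact exploited in the argument that follows.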
Suppose now that DES is closed under composition. Then E_1 E_0 = E_K for some key K. Moreover, (E_1 E_0)^2, (E_1 E_0)^3, ... are also represented by DES keys. Since there are only 2^56 possible keys, we must have (E_1 E_0)^i = (E_1 E_0)^j for some integers i, j with 0 ≤ i < j ≤ 2^56 (otherwise we would have 2^56 + 1 distinct encryption keys). Decrypt i times: (E_1 E_0)^{j-i} = (E_1 E_0)^0, which is the identity map. Since 0 < j − i ≤ 2^56, the smallest positive integer m such that (E_1 E_0)^m is the identity map also satisfies m ≤ 2^56.
Coppersmith found the lengths of the cycles for 33 plaintexts P_0. By the lemma, m is a multiple of these cycle lengths. Therefore, m is greater than or equal to the least common multiple of these cycle lengths, which turned out to be around 10^277. But if DES is closed under composition, we showed that m ≤ 2^56. Therefore, DES is not closed under composition.
DES was the standard cryptographic system for the last 20 years of the twentieth century, but, in the latter half of this period, DES was showing signs of age. In this section we discuss the breaking of DES.
From 1975 onward, there were questions regarding the strength of DES. Many in the academic community complained about the size of the DES keys, claiming that a 56-bit key was insufficient for security. In fact, a few months after the NBS release of DES, Whitfield Diffie and Martin Hellman published a paper titled “Exhaustive cryptanalysis of the NBS Data Encryption Standard” [Diffie-Hellman2] in which they estimated that a machine could be built for $20 million (in 1977 dollars) that would crack DES in roughly a day. This machine’s purpose was specifically to attack DES, which is a point that we will come back to later.
In 1987 DES came under its second five-year review. At this time, NBS asked for suggestions whether to accept the standard for another period, to modify the standard, or to dissolve the standard altogether. The discussions regarding DES saw NSA opposing the recertification of DES. The NSA argued at that time that DES was beginning to show signs of weakness, given the current level of computing power, and proposed doing away with DES entirely and replacing it with a set of NSA-designed algorithms whose inner workings would be known only to NSA and be well protected from reverse engineering techniques. This proposal was turned down, partially due to the fact that several key American industries would be left unprotected while replacement algorithms were put in place. In the end, DES was reapproved as a standard, yet in the process it was acknowledged that DES was showing signs of weakness.
Five years later, after NBS had been renamed NIST, the next five-year review came around. Despite the weaknesses mentioned in 1987 and the technology advances that had taken place in five years, NIST recertified the DES algorithm in 1992.
In 1993, Michael Wiener, a researcher at Bell-Northern Research, proposed and designed a device that would attack DES more efficiently than ever before. The idea was to use the already well-developed switching technology available to the telephone industry.
The year 1996 saw the formulation of three basic approaches for attacking symmetric ciphers such as DES. The first method was to do distributive computation across a vast collection of machines. This had the advantage that it was relatively cheap, and the cost that was involved could be easily distributed over many people. Another approach was to design custom architecture (such as Michael Wiener’s idea) for attacking DES. This promised to be more effective, yet also more expensive, and could be considered as the high-end approach. The middle-of-the-line approach involved programmable logic arrays and has received the least attention to date.
In all three of these cases, the most popular approach to attacking DES was to perform an exhaustive search of the keyspace. For DES this seemed to be reasonable since, as mentioned earlier, more complicated cryptanalytic techniques had failed to show significant improvement over exhaustive search.
The distributive computing approach to breaking DES became very popular, especially with the growing popularity of the Internet. In 1997 the RSA Data Security company issued a challenge to find the key and crack a DES encrypted message. Whoever cracked the message would win a $10,000 prize. Only five months after the announcement of the 1997 DES Challenge, Rocke Verser submitted the winning DES key. What is important about this is that it represents an example where the distributive computing approach had successfully attacked DES. Rocke Verser had implemented a program where thousands of computers spread over the Internet had managed to crack the DES cipher. People volunteered time on their personal (and corporate) machines, running Verser’s program under the agreement that Verser would split the winnings 60% to 40% with the owner of the computer that actually found the key. The key was finally found by Michael Sanders. Roughly 25% of the DES keyspace had been searched by that time. The DES Challenge phrase decrypted to “Strong cryptography makes the world a safer place.”
In the following year, RSA Data Security issued DES Challenge II. This time the correct key was found by Distributed Computing Technologies, and the message decrypted to "Many hands make light work." The key was found after searching roughly 85% of the possible keys, and the search took 39 days. The fact that the winner of the second challenge searched more of the keyspace and finished more quickly than the first shows the dramatic effect that a year of advancement in technology can have on cryptanalysis.
In the summer of 1998 the Electronic Frontier Foundation (EFF) developed a project called DES Cracker whose purpose was to reveal the vulnerability of the DES algorithm when confronted with a specialized architecture. The DES Cracker project was founded on a simple principle: The average computer is ill suited for the task of cracking DES. This is a reasonable statement since ordinary computers, by their very nature, are multipurpose machines that are designed to handle generic tasks such as running an operating system or even playing a computer game or two. What the EFF team proposed to do was build specialized hardware that would take advantage of the parallelizable nature of the exhaustive search. The team had a budget of $200,000.
We now describe briefly the architecture that the EFF team’s research produced. For more information regarding the EFF Cracker as well as the other tasks their cracker was designed to handle, see [Gilmore].
The EFF DES Cracker consisted of basically three main parts: a personal computer, software, and a large collection of specialized chips. The computer was connected to the array of chips and the software oversaw the tasking of each chip. For the most part, the software didn’t interact much with the hardware; it just gave the chips the necessary information to start processing and waited until the chips returned candidate keys. In this sense, the hardware efficiently eliminated a large number of invalid keys and only returned keys that were potentially promising. The software then processed each of the promising candidate keys on its own, checking to see if one of the promising keys was in fact the actual key.
The DES Cracker took a 128-bit (16-byte) sample of ciphertext and broke it into two 64-bit (8-byte) blocks of text. Each chip in the EFF DES Cracker consisted of 24 search units. A search unit was a subset of a chip whose task was to take a key and two 64-bit blocks of ciphertext and attempt to decrypt the first 64-bit block using the key. If the “decrypted” ciphertext looked interesting, then the search unit decrypted the second block and checked to see if that “decrypted” ciphertext was also interesting. If both decrypted texts were interesting then the search unit told the software that the key it checked was promising. If, when the first 64-bit block of ciphertext was decrypted, the decrypted text did not seem interesting enough, then the search unit incremented its key by 1 to form a new key. It then tried this new key, again checking to see if the result was interesting, and proceeded this way as it searched through its allotted region of keyspace.
How did the EFF team define an "interesting" decrypted text? First they assumed that the plaintext satisfied some basic assumptions, for example that it was written using letters, numbers, and punctuation. Since the data they were decrypting was text, they knew each byte corresponded to an eight-bit character. Of the 256 possible values that an eight-bit character type represented, only 69 characters were interesting (the uppercase and lowercase alphabet, the numbers, the space, and a few punctuation marks). For a byte to be considered interesting, it had to contain one of these 69 characters, and hence had a 69/256 chance of being interesting. Approximating this ratio by 1/4, and assuming that the decrypted bytes are in fact independent, we see that the chance that an 8-byte block of decrypted text was interesting is (1/4)^8 = 1/65536. Thus only 1/65536 of the keys it examined were considered promising.
This was not enough of a reduction. The software would still spend too much time searching false candidates. In order to narrow down the field of promising key candidates even further, it was necessary to use the second 8-byte block of text. This block was decrypted to see if the result was interesting. Assuming independence between the blocks, we get that only (1/4)^16 = 1/2^32, roughly one in four billion, of the keys could be considered promising. This significantly reduced the amount of keyspace that the software had to examine.
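The filtering estimate above is easy to reproduce (the 1/4 approximation for 69/256 is the one used in the text):

```python
# The EFF filtering estimate: a decrypted byte looks "interesting"
# with probability about 69/256, approximated here by 1/4.
p_byte = 69 / 256                # ≈ 0.27

p_one_block = (1 / 4) ** 8       # one 8-byte block passes by chance
p_two_blocks = (1 / 4) ** 16     # both blocks pass by chance

print(1 / p_one_block)   # 65536.0  -- one key in 2^16 survives one block
print(1 / p_two_blocks)  # 4294967296.0  -- one key in 2^32 survives both
```

So the hardware forwards only about one candidate key in four billion to the software, which is why the slow software check was not a bottleneck.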
Each chip consisted of 24 search units, and each search unit was given its own region of the keyspace that it was responsible for searching. A single 40-MHz chip would have taken roughly 38 years to search the entire keyspace. To reduce further the amount of time needed to process the keys, the EFF team used 64 chips on a single circuit board, then 12 boards to each chassis, and finally two chassis were connected to the personal computer that oversaw the communication with the software.
The end result was that the DES Cracker consisted of about 1500 chips and could crack DES in roughly 4.5 days on average. The DES Cracker was by no means an optimum model for cracking DES. In particular, each of the chips that it used ran at 40 MHz, which is slow by modern standards. Newer models could certainly be produced in the future that employ chips running at much faster clock cycles.
This development strongly indicated the need to replace DES. There were two main approaches to achieving increased security. The first used DES multiple times and led to the popular method called Triple DES or 3DES. Multiple encryption for block ciphers is discussed in Section 6.4.
The second approach was to find a new system that employs a larger key size than 56 bits. This led to AES, which is discussed in Chapter 8.
When you log in to a computer and enter your password, the computer checks that your password belongs to you and then grants access. However, it would be quite dangerous to store the passwords in a file in the computer. Someone who obtains that file would then be able to open anyone’s account. Making the file available only to the computer administrator might be one solution; but what happens if the administrator makes a copy of the file shortly before changing jobs? The solution is to encrypt the passwords before storing them.
Let f be a one-way function. This means that it is easy to compute f(x), but it is very difficult to solve y = f(x) for x. A password x can then be stored as f(x), along with the user's name. When the user logs in and enters the password x, the computer calculates f(x) and checks that it matches the value of f(x) corresponding to that user. An intruder who obtains the password file will have only the value of f(x) for each user. To log in to the account, the intruder needs to know x, which is hard to compute since f is a one-way function.
In many systems, the encrypted passwords are stored in a public file. Therefore, anyone with access to the system can obtain this file. Assume the function f is known. Then all the words in a dictionary, and various modifications of these words (writing them backward, for example), can be fed into f. Comparing the results with the password file will often yield the passwords of several users.
This dictionary attack can be partially prevented by making the password file not publicly available, but there is still the problem of the departing (or fired) computer administrator. Therefore, other ways of making the information more secure are also needed.
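A minimal sketch of the scheme, with SHA-256 standing in for the one-way function f (the Unix schemes described below used a DES-based f instead; the user names and passwords are invented):

```python
# Password storage with a one-way function f, and a dictionary attack.
import hashlib

def f(password: str) -> str:
    # SHA-256 as a stand-in one-way function
    return hashlib.sha256(password.encode()).hexdigest()

# the stored file holds (user, f(password)), never the password itself
password_file = {"alice": f("s3cret"), "bob": f("letmein")}

def login(user: str, attempt: str) -> bool:
    return password_file.get(user) == f(attempt)

# a dictionary attack feeds candidate words through f and compares
dictionary = ["password", "letmein", "123456"]
cracked = {u for u, h in password_file.items()
           for w in dictionary if f(w) == h}
print(cracked)  # {'bob'} -- bob's weak password is in the dictionary
```

Note that the attack never inverts f; it only evaluates f forward on guesses, which is exactly why weak passwords fall even to a one-way function.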
Here is another interesting problem. It might seem desirable that f can be computed very quickly. However, a slightly slower f slows down a dictionary attack. But slowing f down too much could also cause problems. If f is designed to run in a tenth of a second on a very fast computer, it could take an unacceptable amount of time to log in on a slower computer. There doesn't seem to be a completely satisfactory way to resolve this.
One way to hinder a dictionary attack is with what is called salt. Each password is randomly padded with an additional 12 bits. These 12 bits are then used to modify the function f. The result is stored in the password file, along with the user's name and the value of the 12-bit salt. When a user enters a password x, the computer finds the value of the salt for this user in the file, then uses it in the computation of the modified f(x), which is compared with the value stored in the file.
When salt is used and the words in the dictionary are fed into f, they need to be padded with each of the 4096 possible values of the salt. This slows down the computations considerably. Also, suppose an attacker stores the values of f for all the words in the dictionary. This could be done in anticipation of attacking several different password files. With salt, the storage requirements increase dramatically, since each word needs to be stored 4096 times, once for each possible salt.
The main purpose of salt is to stop attacks that aim at finding a random person’s password. In particular, it makes the set of poorly chosen passwords somewhat more secure. Since many people use weak passwords, this is desirable. Salt does not slow down an attack against an individual password (except by preventing use of over-the-counter DES chips; see below). If Eve wants to find Bob’s password and has access to the password file, she finds the value of the salt used for Bob and tries a dictionary attack, for example, using only this value of salt corresponding to Bob. If Bob’s password is not in the dictionary, this will fail, and Eve may have to resort to an exhaustive search of all possible passwords.
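A salted variant of the previous sketch (again with SHA-256 as a stand-in for f; here the salt is simply mixed into the hash input, whereas the Unix scheme below modifies DES itself):

```python
# Salted password storage: each user gets a random 12-bit salt, so a
# precomputed dictionary would need all 4096 variants of every word.
import hashlib
import secrets

def f_salted(password: str, salt: int) -> str:
    # the salt modifies the function; here it is prepended to the input
    return hashlib.sha256(salt.to_bytes(2, "big")
                          + password.encode()).hexdigest()

def store(password: str):
    salt = secrets.randbelow(4096)          # random 12-bit salt
    return salt, f_salted(password, salt)   # both go in the file

def check(password: str, salt: int, stored: str) -> bool:
    return f_salted(password, salt) == stored

salt, stored = store("s3cret")
assert check("s3cret", salt, stored)
assert not check("wrong", salt, stored)
```

Two users with the same password almost certainly get different stored values, so an attacker cannot even tell that the passwords coincide without running the dictionary attack separately for each salt.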
In many Unix password schemes, the one-way function was based on DES. The first eight characters of the password are converted to seven-bit ASCII (see Section 4.1). These 56 bits become a DES key. If the password is shorter than eight symbols, it is padded with zeros to obtain the 56 bits. The "plaintext" of all zeros is then encrypted using 25 rounds of DES with this key. The output is stored in the password file. The function

password → E_password^(25)(000...0)

is believed to be one-way. Namely, we know the "ciphertext," which is the output, and the "plaintext," which is all zeros. Finding the key, which is the password, amounts to a known plaintext attack on DES, which is generally assumed to be difficult.
In order to increase security, salt is added as follows. A random 12-bit number is generated as the salt. Recall that in DES, the expansion function E takes a 32-bit input R (the right side of the input for the round) and expands it to a 48-bit output E(R). If the first bit of the salt is 1, the first and 25th bits of E(R) are swapped. If the second bit of the salt is 1, the second and 26th bits of E(R) are swapped. This continues through the 12th bit of the salt: if it is 1, the 12th and 36th bits of E(R) are swapped. When a bit of the salt is 0, it causes no swap. If the salt is all zero, then no swaps occur and we are working with the usual DES. In this way, the salt yields 4096 variations of DES.
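The swap rule is simple enough to state in a few lines of code. This is only the salt step, not DES itself, and the bit lists here track positions rather than real data:

```python
# How a 12-bit salt perturbs the 48-bit expansion output E(R):
# if salt bit i is 1 (i = 0..11), bits i and i + 24 are swapped.
def apply_salt(expanded_bits, salt_bits):
    # expanded_bits: list of 48 bits; salt_bits: list of 12 bits
    out = list(expanded_bits)
    for i, s in enumerate(salt_bits):
        if s == 1:
            out[i], out[i + 24] = out[i + 24], out[i]
    return out

e = list(range(48))        # stand-in: track positions, not real bits
salt = [1] + [0] * 11      # only the first salt bit is set
swapped = apply_salt(e, salt)
print(swapped[0], swapped[24])  # 24 0 -- the first and 25th bits traded
```

With the all-zero salt the function returns its input unchanged, recovering ordinary DES, and the 2^12 salt values give the 4096 variants mentioned above.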
One advantage of using salt to modify DES is that someone cannot use high-speed DES chips to compute the one-way function when performing a dictionary attack. Instead, a chip would need to be designed that tries all 4096 modifications of DES caused by the salt; otherwise the attack could be performed with software, which is much slower.
Salt in any password scheme is regarded by many as a temporary measure. As storage space increases and computer speed improves, a factor of 4096 quickly fades, so eventually a new system must be developed.
For more on password protocols, see Section 12.6.
Consider the following DES-like encryption method. Start with a message of 2m bits. Divide it into two blocks of length m (a left half and a right half): M_0 M_1. The key K consists of k bits, for some integer k. There is a function f(K, M) that takes an input of k bits and m bits and gives an output of m bits. One round of encryption starts with a pair (M_{i-1}, M_i). The output is the pair (M_i, M_{i+1}), where

M_{i+1} = M_{i-1} ⊕ f(K, M_i)

(⊕ means XOR, which is addition mod 2 on each bit). This is done for n rounds, so the ciphertext is (M_n, M_{n+1}).
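The round structure can be sketched as follows, with 8-bit halves and an invented placeholder round function (any function of the key and the right half would do in its place):

```python
# Sketch of the DES-like round structure above, on 8-bit halves.
def f(key: int, block: int) -> int:
    # toy round function, invented for illustration only
    return (block * 31 + key) % 256

def encrypt(key: int, m0: int, m1: int, rounds: int):
    # one round sends (M_{i-1}, M_i) to (M_i, M_{i-1} XOR f(K, M_i))
    for _ in range(rounds):
        m0, m1 = m1, m0 ^ f(key, m1)
    return m0, m1

ciphertext = encrypt(7, 0xAB, 0xCD, 4)
print(ciphertext)
```

Note that f itself is never inverted during the round; only XOR is undone, which is the key observation needed in Exercise 1.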
If you have a machine that does the n-round encryption just described, how would you use the same machine to decrypt the ciphertext (using the same key K)? Prove that your decryption method works.
Suppose K has m bits and f(K, M) = K ⊕ M, and suppose the encryption process consists of two rounds. If you know only a ciphertext, can you deduce the plaintext and the key? If you know a ciphertext and the corresponding plaintext, can you deduce the key? Justify your answers.
Suppose K has m bits and f(K, M) = K ⊕ M, and suppose the encryption process consists of three rounds. Why is this system not secure?
Bud gets a budget 2-round Feistel system. It uses a 32-bit L_0, a 32-bit R_0, and a 32-bit key K. The function is f(K, R) = K ⊕ R, with the same key for each round. Moreover, to avoid transmission errors, he always uses a 32-bit message M and lets L_0 = R_0 = M. Eve does not know Bud's key, but she obtains the ciphertext for one of Bud's encryptions. Describe how Eve can obtain the plaintext M and the key K.
As described in Section 7.6, a way of storing passwords on a computer is to use DES with the password as the key to encrypt a fixed plaintext (usually 000 . . . 0). The ciphertext is then stored in the file. When you log in, the procedure is repeated and the ciphertexts are compared. Why is this method more secure than the similar-sounding method of using the password as the plaintext and using a fixed key (for example, the key of all 0s)?
Nelson produces budget encryption machines for people who cannot afford a full-scale version of DES. The encryption consists of one round of a Feistel system. The plaintext has 64 bits and is divided into a left half L_0 and a right half R_0. The encryption uses a function f that takes an input string of 32 bits and outputs a string of 32 bits. (There is no key; anyone naive enough to buy this system should not be trusted to choose a key.) The left half of the ciphertext is L_0 ⊕ f(R_0) and the right half is R_0. Suppose Alice uses one of these machines to encrypt and send a message to Bob. Bob has an identical machine. How does he use the machine to decrypt the ciphertext he receives? Show that this decryption works (do not quote results about Feistel systems; you are essentially justifying that a special case works).
Let K be the DES key consisting of all 1s. Show that if E_K(P) = C, then E_K(C) = P, so encryption twice with this key returns the plaintext. (Hint: The round keys are extracted from K, so each of K_1, ..., K_16 consists of all 1s. Decryption uses these keys in reverse order.)
Find another key with the same property as in part (a).
Alice uses quadruple DES encryption. To save time, she chooses two keys, K_1 and K_2, and encrypts via c = E_{K1}(E_{K2}(E_{K2}(E_{K1}(m)))). One day, Alice chooses K_1 to be the key of all 1s and K_2 to be the key of all 0s. Eve is planning to do a meet-in-the-middle attack, but after examining a few plaintext–ciphertext pairs, she decides that she does not need to carry out this attack. Why? (Hint: Look at Exercise 5.)
For a string of bits u, let ū denote the complementary string obtained by changing all the 1s to 0s and all the 0s to 1s (equivalently, ū = u ⊕ 1111...1). Show that if the DES key K encrypts P to C (that is, E_K(P) = C), then K̄ encrypts P̄ to C̄. (Hint: This has nothing to do with the structure of the S-boxes. To do the problem, just work through the encryption algorithm and show that the input to the S-boxes is the same for both encryptions. A key point is that the expansion of R̄ is the complementary string for the expansion of R.)
Suppose we modify the Feistel setup as follows. Divide the plaintext into three equal blocks: L_0, M_0, R_0. Let the key for the ith round be K_i and let f be some function that produces an output of the appropriate size. The ith round of encryption is given by
This continues for rounds. Consider the decryption algorithm that starts with the ciphertext and uses the algorithm
This continues for n rounds, down to the first round. Show that, for all i, the quantities produced by the decryption algorithm agree with those produced by the encryption algorithm, so that the decryption algorithm returns the plaintext. (Remark: Note that the decryption algorithm is similar to the encryption algorithm, but cannot be implemented on the same machine as easily as in the case of the Feistel system.)
Suppose C = E_K(P) is the DES encryption of a message P using the key K. We showed in Exercise 7 that DES has the complementation property, namely that if E_K(P) = C then E_K̄(P̄) = C̄, where x̄ is the bit complement of x. That is, the bitwise complements of the key and the plaintext result in the bitwise complement of the DES ciphertext. Explain how an adversary can use this property in a brute force, chosen plaintext attack to reduce the expected number of keys that would be tried from 2^55 to 2^54. (Hint: Consider a chosen plaintext set of (P, C_1) and (P̄, C_2).)
(For those who are comfortable with programming)
Write a program that performs one round of the simplified DES-type algorithm presented in Section 7.2.
Create a sample input bitstring, and a random key. Calculate the corresponding ciphertext when you perform one round, two rounds, three rounds, and four rounds of the Feistel structure using your implementation. Verify that the decryption procedure works in each case.
Let E_K denote four-round encryption using the key K. By trying all keys, show that there are no weak keys for this simplified DES-type algorithm. Recall that a weak key is one such that when we encrypt a plaintext twice we get back the plaintext. That is, a weak key satisfies E_K(E_K(P)) = P for every possible P. (Note: For each key K, you need to find some P such that E_K(E_K(P)) ≠ P.)
Suppose you modify the encryption algorithm to create a new encryption algorithm by swapping the left and right halves after the four Feistel rounds. Are there any weak keys for this algorithm?
Using your implementation of the encryption function E_K from Computer Problem 1(b), implement the CBC mode of operation for this simplified DES-type algorithm.
Create a plaintext message consisting of several 12-bit blocks, and show how it encrypts and decrypts using CBC.
Suppose that you have two plaintexts that differ in exactly one bit. Show the effect that this has on the corresponding ciphertexts.
In 1997, the National Institute of Standards and Technology put out a call for candidates to replace DES. Among the requirements were that the new algorithm should allow key sizes of 128, 192, and 256 bits, it should operate on blocks of 128 input bits, and it should work on a variety of different hardware, for example, eight-bit processors that could be used in smart cards and the 32-bit architecture commonly used in personal computers. Speed and cryptographic strength were also important considerations. In 1998, the cryptographic community was asked to comment on 15 candidate algorithms. Five finalists were chosen: MARS (from IBM), RC6 (from RSA Laboratories), Rijndael (from Joan Daemen and Vincent Rijmen), Serpent (from Ross Anderson, Eli Biham, and Lars Knudsen), and Twofish (from Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson). Eventually, Rijndael was chosen as the Advanced Encryption Standard. The other four algorithms are also very strong, and it is likely that they will be used in many future cryptosystems.
As with other block ciphers, Rijndael can be used in several modes, for example, ECB, CBC, CFB, OFB, and CTR (see Section 6.3).
Before proceeding to the algorithm, we answer a very basic question: How do you pronounce Rijndael? We quote from their Web page:
If you’re Dutch, Flemish, Indonesian, Surinamer or South-African, it’s pronounced like you think it should be. Otherwise, you could pronounce it like “Reign Dahl,” “Rain Doll,” “Rhine Dahl”. We’re not picky. As long as you make it sound different from “Region Deal.”
Rijndael is designed for use with keys of lengths 128, 192, and 256 bits. For simplicity, we’ll restrict to 128 bits. First, we give a brief outline of the algorithm, then describe the various components in more detail.
The algorithm consists of 10 rounds (when the key has 192 bits, 12 rounds are used, and when the key has 256 bits, 14 rounds are used). Each round has a round key, derived from the original key. There is also a 0th round key, which is the original key. A round starts with an input of 128 bits and produces an output of 128 bits.
There are four basic steps, called layers, that are used to form the rounds:
The SubBytes Transformation (SB): This nonlinear layer is for resistance to differential and linear cryptanalysis attacks.
The ShiftRows Transformation (SR): This linear mixing step causes diffusion of the bits over multiple rounds.
The MixColumns Transformation (MC): This layer has a purpose similar to ShiftRows.
AddRoundKey (ARK): The round key is XORed with the result of the above layer.
A round is then: SubBytes, ShiftRows, MixColumns, AddRoundKey.
Putting everything together, we obtain the following (see also Figure 8.1): an initial AddRoundKey using the 0th round key, then nine rounds as above using round keys 1 through 9, then the final round using the 10th round key.
The final round uses the SubBytes, ShiftRows, and AddRoundKey steps but omits MixColumns (this omission will be explained in the decryption section).
The 128-bit output is the ciphertext block.
We now describe the steps in more detail. The 128 input bits are grouped into 16 bytes of eight bits each, call them

a_{0,0}, a_{1,0}, a_{2,0}, a_{3,0}, a_{0,1}, a_{1,1}, ..., a_{3,3}.

These are arranged into a 4 × 4 matrix

[ a_{0,0}  a_{0,1}  a_{0,2}  a_{0,3} ]
[ a_{1,0}  a_{1,1}  a_{1,2}  a_{1,3} ]
[ a_{2,0}  a_{2,1}  a_{2,2}  a_{2,3} ]
[ a_{3,0}  a_{3,1}  a_{3,2}  a_{3,3} ].
In the following, we'll occasionally need to work with the finite field GF(2^8). This is covered in Section 3.11. However, for the present purposes, we only need the following facts. The elements of GF(2^8) are bytes, which consist of eight bits. They can be added by XOR. They can also be multiplied in a certain way (i.e., the product of two bytes is again a byte), but this process is more complicated. Each byte b except the zero byte has a multiplicative inverse; that is, there is a byte b' such that b · b' = 00000001. Since we can do arithmetic operations on bytes, we can work with matrices whose entries are bytes.
As a technical point, we note that the model of GF(2^8) depends on a choice of irreducible polynomial of degree 8. The choice for Rijndael is X^8 + X^4 + X^3 + X + 1. This is also the polynomial used in the examples in Section 3.11. Other choices for this polynomial would presumably give equally good algorithms.
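Multiplication in this field can be sketched in a few lines: multiply the polynomials bit by bit, reducing by X^8 + X^4 + X^3 + X + 1 (bit pattern 0x11B) whenever a degree-8 term appears:

```python
# Multiplication in GF(2^8) with the Rijndael polynomial
# X^8 + X^4 + X^3 + X + 1 (0x11B). Addition of bytes is XOR.
def gf_mul(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:            # this bit of b contributes a copy of a
            result ^= a
        a <<= 1              # multiply a by X
        if a & 0x100:        # a degree-8 term appeared: reduce
            a ^= 0x11B
        b >>= 1
    return result

assert gf_mul(0x01, 0xAB) == 0xAB      # 00000001 is the identity
assert gf_mul(0x57, 0x83) == 0xC1      # the worked example in FIPS-197
```

The inverse of a nonzero byte b is the byte b' with gf_mul(b, b') == 1; for a table this small it can even be found by brute force over all 255 nonzero bytes.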
In this step, each of the bytes in the matrix is changed to another byte by Table 8.1, called the S-box.
Write a byte as eight bits: abcdefgh. Look for the entry in the abcd row and efgh column (the rows and columns are numbered from 0 to 15). This entry, when converted to binary, is the output. For example, if the input byte is 10001011, we look in row 8 (the ninth row) and column 11 (the twelfth column). The entry is 61, which as a byte is 00111101 in binary. This is the output of the S-box.
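The row/column indexing amounts to splitting the byte into its two nibbles, which is easy to check (the table argument below is assumed to be the 16 × 16 S-box stored as a list of lists):

```python
# Indexing into an S-box table: the high nibble abcd is the row,
# the low nibble efgh is the column.
def sbox_lookup(table, byte: int) -> int:
    row = byte >> 4        # abcd, the first four bits
    col = byte & 0x0F      # efgh, the last four bits
    return table[row][col]

byte = 0b10001011
print(byte >> 4, byte & 0x0F)  # 8 11 -- row 8, column 11, as in the text
```

With the identity table (entry r·16 + c in row r, column c) the lookup simply returns its input, which is a convenient sanity check for the indexing.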
The output of SubBytes is again a 4 × 4 matrix of bytes; let's call it (b_{i,j}).
The four rows of the matrix (b_{i,j}) are shifted cyclically to the left by offsets of 0, 1, 2, and 3, to obtain

[ b_{0,0}  b_{0,1}  b_{0,2}  b_{0,3} ]
[ b_{1,1}  b_{1,2}  b_{1,3}  b_{1,0} ]
[ b_{2,2}  b_{2,3}  b_{2,0}  b_{2,1} ]
[ b_{3,3}  b_{3,0}  b_{3,1}  b_{3,2} ].
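In code the row rotation is a one-liner (the state here is a 4 × 4 list of lists; the numbers stand in for bytes):

```python
# ShiftRows: row i of the 4x4 state is rotated left by i positions.
def shift_rows(state):
    return [row[i:] + row[:i] for i, row in enumerate(state)]

state = [[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11],
         [12, 13, 14, 15]]
print(shift_rows(state))
# row 1 becomes [5, 6, 7, 4], row 2 becomes [10, 11, 8, 9],
# row 3 becomes [15, 12, 13, 14]; row 0 is unchanged
```

The effect is that each column of the output mixes bytes from all four columns of the input, which is the diffusion mentioned above.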
Regard a byte as an element of GF(2^8), as in Section 3.11. Then the output of the ShiftRows step is a 4 × 4 matrix (c_{i,j}) with entries in GF(2^8). Multiply this by a fixed 4 × 4 matrix, again with entries in GF(2^8), to produce the output (d_{i,j}), as follows:

[ 00000010  00000011  00000001  00000001 ]   [ c_{0,0} ... c_{0,3} ]   [ d_{0,0} ... d_{0,3} ]
[ 00000001  00000010  00000011  00000001 ] . [ c_{1,0} ... c_{1,3} ] = [ d_{1,0} ... d_{1,3} ]
[ 00000001  00000001  00000010  00000011 ]   [ c_{2,0} ... c_{2,3} ]   [ d_{2,0} ... d_{2,3} ]
[ 00000011  00000001  00000001  00000010 ]   [ c_{3,0} ... c_{3,3} ]   [ d_{3,0} ... d_{3,3} ].
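The step can be sketched directly from the matrix product, using GF(2^8) multiplication with the Rijndael polynomial and XOR as addition:

```python
# MixColumns: multiply each column of the state by the fixed matrix
# [2 3 1 1 / 1 2 3 1 / 1 1 2 3 / 3 1 1 2] over GF(2^8).
def gf_mul(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B       # reduce by X^8 + X^4 + X^3 + X + 1
        b >>= 1
    return result

M = [[2, 3, 1, 1],
     [1, 2, 3, 1],
     [1, 1, 2, 3],
     [3, 1, 1, 2]]

def mix_columns(state):
    # matrix product M . state, with XOR as the addition
    return [[gf_mul(M[i][0], state[0][j]) ^ gf_mul(M[i][1], state[1][j])
             ^ gf_mul(M[i][2], state[2][j]) ^ gf_mul(M[i][3], state[3][j])
             for j in range(4)] for i in range(4)]
```

As a check, the column (d4, bf, 5d, 30) maps to (04, 66, 81, e5), the worked example in Appendix B of FIPS-197.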
The round key, derived from the key in a way we'll describe later, consists of 128 bits, which are arranged in a 4 × 4 matrix (k_{i,j}) consisting of bytes. This is XORed with the output of the MixColumns step:

(e_{i,j}) = (d_{i,j}) ⊕ (k_{i,j}).
This is the final output of the round.
The original key consists of 128 bits, which are arranged into a 4 × 4 matrix of bytes. This matrix is expanded by adjoining 40 more columns, as follows. Label the first four columns W(0), W(1), W(2), W(3). The new columns are generated recursively. Suppose columns up through W(i − 1) have been defined. If i is not a multiple of 4, then

W(i) = W(i − 4) ⊕ W(i − 1).
If i is a multiple of 4, then

W(i) = W(i − 4) ⊕ T(W(i − 1)),
where T(W(i − 1)) is the transformation of W(i − 1) obtained as follows. Let the elements of the column be a, b, c, d. Shift these cyclically to obtain b, c, d, a. Now replace each of these bytes with the corresponding element in the S-box from the SubBytes step, to get 4 bytes e, f, g, h. Finally, compute the round constant

r(i) = 00000010^((i − 4)/4)
in GF(2^8) (recall that we are in the case where i is a multiple of 4). Then T(W(i − 1)) is the column vector

(e ⊕ r(i), f, g, h).
In this way, columns W(4), ..., W(43) are generated from the initial four columns.
The round key for the ith round consists of the columns

W(4i), W(4i + 1), W(4i + 2), W(4i + 3).
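The recursion is short enough to sketch. The S-box here is deliberately a placeholder (the identity), so the output is only structurally correct; a real implementation would substitute the Rijndael S-box of Table 8.1:

```python
# Sketch of the Rijndael key expansion for a 128-bit key.
SBOX = list(range(256))   # placeholder! replace with the real S-box

def gf_pow2(n: int) -> int:
    # round constant r(i) = 00000010^((i-4)/4) in GF(2^8)
    r = 1
    for _ in range(n):
        r <<= 1
        if r & 0x100:
            r ^= 0x11B    # reduce by X^8 + X^4 + X^3 + X + 1
    return r

def expand_key(key16: bytes):
    # columns W(0)..W(3) come straight from the 16-byte key
    W = [list(key16[4 * i: 4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        if i % 4 != 0:
            W.append([x ^ y for x, y in zip(W[i - 4], W[i - 1])])
        else:
            a, b, c, d = W[i - 1]
            # rotate to b, c, d, a, then apply the S-box
            e, f, g, h = SBOX[b], SBOX[c], SBOX[d], SBOX[a]
            t = [e ^ gf_pow2((i - 4) // 4), f, g, h]
            W.append([x ^ y for x, y in zip(W[i - 4], t)])
    return W   # round key i is columns W(4i)..W(4i+3)

W = expand_key(bytes(16))
assert len(W) == 44       # 4 original columns plus 40 new ones
```

Even with the placeholder S-box one can see the structure: with the all-zero key, W(4) = (1, 0, 0, 0) because only the round constant r(4) = 00000001 survives.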
Although the S-box is implemented as a lookup table, it has a simple mathematical description. Start with a byte x_7 x_6 x_5 x_4 x_3 x_2 x_1 x_0, where each x_i is a binary bit. Compute its inverse in GF(2^8), as in Section 3.11. If the byte is 00000000, there is no inverse, so we use 00000000 in place of its inverse. The resulting byte y_7 y_6 y_5 y_4 y_3 y_2 y_1 y_0 represents an eight-dimensional column vector, with the rightmost bit y_0 in the top position. Multiply by a matrix and add the column vector (1, 1, 0, 0, 0, 1, 1, 0) to obtain a vector (z_0, z_1, ..., z_7) as follows:

[ z_0 ]   [ 1 0 0 0 1 1 1 1 ] [ y_0 ]   [ 1 ]
[ z_1 ]   [ 1 1 0 0 0 1 1 1 ] [ y_1 ]   [ 1 ]
[ z_2 ]   [ 1 1 1 0 0 0 1 1 ] [ y_2 ]   [ 0 ]
[ z_3 ] = [ 1 1 1 1 0 0 0 1 ] [ y_3 ] ⊕ [ 0 ]
[ z_4 ]   [ 1 1 1 1 1 0 0 0 ] [ y_4 ]   [ 0 ]
[ z_5 ]   [ 0 1 1 1 1 1 0 0 ] [ y_5 ]   [ 1 ]
[ z_6 ]   [ 0 0 1 1 1 1 1 0 ] [ y_6 ]   [ 1 ]
[ z_7 ]   [ 0 0 0 1 1 1 1 1 ] [ y_7 ]   [ 0 ]
The byte z_7 z_6 z_5 z_4 z_3 z_2 z_1 z_0 is the entry in the S-box.
For example, start with the byte 11001011. Its inverse in GF(2^8) is 00000100, as can be calculated with the method of Section 3.11. We now calculate
This yields the byte 00011111. To compare with the table: the first four bits of the input, 1100, represent 12, and the last four bits, 1011, represent 11. Add 1 to each of these numbers (since the first row and column are numbered 0) and look in the 13th row and 12th column of the S-box. The entry is 31, which in binary is 00011111, in agreement with the calculation.
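The whole algebraic description fits in a few lines of code. The inverse is found here by brute force over the 255 nonzero bytes, which is perfectly adequate for a table this small:

```python
# The algebraic description of the S-box: invert in GF(2^8)
# (0 maps to 0), then apply the affine map described above.
def gf_mul(a: int, b: int) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B    # reduce by X^8 + X^4 + X^3 + X + 1
        b >>= 1
    return result

def gf_inv(a: int) -> int:
    if a == 0:
        return 0          # the special case for the zero byte
    return next(b for b in range(1, 256) if gf_mul(a, b) == 1)

def sbox(byte: int) -> int:
    x = gf_inv(byte)
    c = 0b01100011        # the constant vector (1,1,0,0,0,1,1,0)
    y = 0
    for i in range(8):
        # row i of the circulant matrix: bits i, i+4, i+5, i+6, i+7
        bit = (((x >> i) & 1) ^ ((x >> ((i + 4) % 8)) & 1)
               ^ ((x >> ((i + 5) % 8)) & 1) ^ ((x >> ((i + 6) % 8)) & 1)
               ^ ((x >> ((i + 7) % 8)) & 1) ^ ((c >> i) & 1))
        y |= bit << i
    return y

assert sbox(0b11001011) == 0b00011111   # the example in the text
assert sbox(0x00) == 0x63               # zero maps to the constant
```

Note that sbox(0) is just the constant vector itself, and, as claimed in the design criteria discussed next, no byte is mapped to itself or to its complement.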
Some of the considerations in the design of the S-box were the following. The map x → x^(-1) was used to achieve nonlinearity. However, the simplicity of this map could possibly allow certain attacks, so it was combined with multiplication by the matrix and adding the vector, as described previously. The matrix was chosen mostly because of its simple form (note how the rows are shifts of each other). The vector was chosen so that no input ever equals its S-box output or the complement of its S-box output (complementation means changing each 1 to 0 and each 0 to 1).
Each of the steps SubBytes, ShiftRows, MixColumns, and AddRoundKey is invertible:
The inverse of SubBytes is another lookup table, called InvSubBytes.
The inverse of ShiftRows is obtained by shifting the rows to the right instead of to the left, yielding InvShiftRows.
The inverse of MixColumns exists because the matrix used in MixColumns is invertible. The transformation InvMixColumns is given by multiplication by the matrix

  [00001110 00001011 00001101 00001001]
  [00001001 00001110 00001011 00001101]
  [00001101 00001001 00001110 00001011]
  [00001011 00001101 00001001 00001110]
AddRoundKey is its own inverse.
The Rijndael encryption consists of the steps
ARK
SB, SR, MC, ARK
...
SB, SR, MC, ARK
SB, SR, ARK.
Recall that MC is missing in the last round.
To decrypt, we need to run through the inverses of these steps in the reverse order. This yields the following preliminary version of decryption:
ARK, ISR, ISB
ARK, IMC, ISR, ISB
...
ARK, IMC, ISR, ISB
ARK.
However, we want to rewrite this decryption in order to make it look more like encryption.
Observe that applying SB then SR is the same as first applying SR then SB. This happens because SB acts one byte at a time and SR permutes the bytes. Correspondingly, the order of ISR and ISB can be reversed.
We also want to reverse the order of ARK and IMC, but this is not possible. Instead, we proceed as follows. Applying MC and then ARK to a matrix C is given as

C → MC → MC ⊕ Ki,

where M is the matrix in MixColumns and Ki is the round key matrix. The inverse is obtained by solving for C in terms of the output E = MC ⊕ Ki, namely, C = M^(-1)E ⊕ M^(-1)Ki. Therefore, the process is

E → M^(-1)E → M^(-1)E ⊕ Ki′,

where Ki′ = M^(-1)Ki. The first arrow is simply InvMixColumns applied to E. If we let InvAddRoundKey (IARK) be XORing with Ki′, then we have that the inverse of “MC then ARK” is “IMC then IARK.” Therefore, we can replace the steps “ARK then IMC” with the steps “IMC then IARK” in the preceding decryption sequence.
We now see that decryption is given by
ARK, ISB, ISR
IMC, IARK, ISB, ISR
...
IMC, IARK, ISB, ISR
ARK.
Regroup the lines to obtain the final version:

ARK
ISB, ISR, IMC, IARK
...
ISB, ISR, IMC, IARK
ISB, ISR, ARK.
Therefore, the decryption is given by essentially the same structure as encryption, but SubBytes, ShiftRows, and MixColumns are replaced by their inverses, and AddRoundKey is replaced by InvAddRoundKey, except in the initial and final steps. Of course, the round keys are used in the reverse order, so the first ARK uses the 10th round key, and the last ARK uses the 0th round key.
The preceding shows why MixColumns is omitted in the last round. Suppose it had been left in. Then the encryption would start ARK, SB, SR, MC, ARK, ..., and it would end with ARK, SB, SR, MC, ARK. Therefore, the beginning of the decryption would be (after the reorderings) IMC, IARK, ISB, ISR, .... This means the decryption would have an unnecessary IMC at the beginning, and this would have the effect of slowing down the algorithm.
Another way to look at encryption is that there is an initial ARK, then a sequence of alternating half rounds

(SB, SR), (MC, ARK), (SB, SR), ..., (MC, ARK), (SB, SR),

followed by a final ARK. The decryption is ARK, followed by a sequence of alternating half rounds

(ISB, ISR), (IMC, IARK), (ISB, ISR), ..., (IMC, IARK), (ISB, ISR),

followed by a final ARK. From this point of view, we see that a final MC would not fit naturally into either type of half round, and it is natural to leave it out.
On eight-bit processors, decryption is not quite as fast as encryption. This is because the entries in the matrix for InvMixColumns are more complex than those for MixColumns, and this is enough to make decryption take around 30% longer than encryption for these processors. However, in many applications, decryption is not needed, for example, when CFB mode (see Section 6.3) is used. Therefore, this is not considered to be a significant drawback.
The fact that encryption and decryption are not identical processes leads to the expectation that there are no weak keys, in contrast to DES (see Exercise 5 in Chapter 7) and several other algorithms.
The Rijndael algorithm is not a Feistel system (see Sections 7.1 and 7.2). In a Feistel system, half the bits are moved but not changed during each round. In Rijndael, all bits are treated uniformly. This has the effect of diffusing the input bits faster. It can be shown that two rounds are sufficient to obtain full diffusion, namely, each of the 128 output bits depends on each of the 128 input bits.
The S-box was constructed in an explicit and simple algebraic way so as to avoid any suspicions of trapdoors built into the algorithm. The desire was to avoid the mysteries about the S-boxes that haunted DES. The Rijndael S-box is highly nonlinear, since it is based on the map x ↦ x^(-1) in GF(2^8). It is excellent at resisting differential and linear cryptanalysis, as well as more recently studied methods called interpolation attacks.
The ShiftRows step was added to resist two recently developed attacks, namely truncated differentials and the Square attack (Square was a predecessor of Rijndael).
The MixColumns causes diffusion among the bytes. A change in one input byte in this step always results in all four output bytes changing. If two input bytes are changed, at least three output bytes are changed.
The Key Schedule involves nonlinear mixing of the key bits, since it uses the S-box. The mixing is designed to resist attacks where the cryptanalyst knows part of the key and tries to deduce the remaining bits. Also, it aims to ensure that two distinct keys do not have a large number of round keys in common. The round constants are used to eliminate symmetries in the encryption process by making each round different.
The number of rounds was chosen to be 10 because there are attacks that are better than brute force up to six rounds. No known attack beats brute force for seven or more rounds. It was felt that four extra rounds provide a large enough margin of safety. Of course, the number of rounds could easily be increased if needed.
Suppose the key for round 0 in AES consists of 128 bits, each of which is 0.
Show that the key for the first round is , where
Show that (Hint: This can be done without computing explicitly).
Suppose the key for round 0 in AES consists of 128 bits, each of which is 1.
Show that the key for the first round is , where
Note that = the complement of (the complement can be obtained by XORing with a string of all 1s).
Show that and that (Hints: is a string of all 1s. Also, the relation might be useful.)
Let f be a function from binary strings (of a fixed length n) to binary strings. For the purposes of this problem, let’s say that f has the equal difference property if the following is satisfied: Whenever a, b, c, d are binary strings of length n that satisfy a ⊕ b = c ⊕ d, then f(a) ⊕ f(b) = f(c) ⊕ f(d).
Show that if M is a matrix and v is a fixed binary string such that f(x) = Mx ⊕ v for all x, then f has the equal difference property.
Show that the ShiftRows Transformation, the MixColumns Transformation, and the RoundKey Addition have the equal difference property.
Suppose we remove all SubBytes Transformation steps from the AES algorithm. Show that the resulting AES encryption would then have the equal difference property defined in Exercise 3.
Suppose we are in the situation of part (a), with all SubBytes Transformation steps removed. Let m1 and m2 be two 128-bit plaintext blocks and let E(m1) and E(m2) be their encryptions under this modified AES scheme. Show that E(m1) ⊕ E(m2) equals the result of encrypting m1 ⊕ m2 using only the ShiftRows and MixColumns Transformations (that is, both the RoundKey Addition and the SubBytes Transformation are missing). In particular, E(m1) ⊕ E(m2) is independent of the key.
Suppose we are in the situation of part (a), and Eve knows m and E(m) for some 128-bit string m. Describe how she can decrypt any ciphertext (your solution should be much faster than using brute force or making a list of all encryptions). (Remark: This shows that the SubBytes transformation is needed to prevent the equal difference property. See also Exercise 5.)
Let , , , . Let denote the SubBytes Transformation of . Show that
Conclude that the SubBytes Transformation is not an affine map (that is, a map of the form x ↦ Mx ⊕ v for a matrix M and a fixed vector v) from bytes to bytes. (Hint: See Exercise 3(a).)
Your friend builds a very powerful computer that uses brute force to find a 56-bit DES key in 1 hour, so you make an even better machine that can try AES keys in 1 second. How long will this machine take to try all AES keys?
Alice wants to send a message to Bob, but they have not had previous contact and they do not want to take the time to send a courier with a key. Therefore, all information that Alice sends to Bob will potentially be obtained by the evil observer Eve. However, it is still possible for a message to be sent in such a way that Bob can read it but Eve cannot.
With all the previously discussed methods, this would be impossible. Alice would have to send a key, which Eve would intercept. She could then decrypt all subsequent messages. The possibility of the present scheme, called a public key cryptosystem, was first publicly suggested by Diffie and Hellman in their classic 1976 paper [Diffie-Hellman]. However, they did not yet have a practical implementation (although they did present an alternative key exchange procedure that works over public channels; see Section 10.4). In the next few years, several methods were proposed. The most successful, based on the idea that factorization of integers into their prime factors is hard, was proposed by Rivest, Shamir, and Adleman in 1977 and is known as the RSA algorithm.
It had long been claimed that government cryptographic agencies had discovered the RSA algorithm several years earlier, but secrecy rules prevented them from releasing any evidence. Finally, in 1997, documents released by CESG, a British cryptographic agency, showed that in 1970, James Ellis had discovered public key cryptography, and in 1973, Clifford Cocks had written an internal document describing a version of the RSA algorithm in which the encryption exponent e (see the discussion that follows) was the same as the modulus n.
Here is how the RSA algorithm works. Bob chooses two distinct large primes p and q and multiplies them together to form

n = pq.

He also chooses an encryption exponent e such that

gcd(e, (p − 1)(q − 1)) = 1.

He sends the pair (n, e) to Alice but keeps the values of p and q secret. In particular, Alice, who could possibly be an enemy of Bob, never needs to know p and q to send her message to Bob securely. Alice writes her message as a number m. If m is larger than n, she breaks the message into blocks, each of which is less than n. However, for simplicity, let’s assume for the moment that m < n. Alice computes

c ≡ m^e (mod n)

and sends c to Bob. Since Bob knows p and q, he can compute (p − 1)(q − 1) and therefore can find the decryption exponent d with

de ≡ 1 (mod (p − 1)(q − 1)).

As we’ll see later,

m ≡ c^d (mod n),

so Bob can read the message.
We summarize the algorithm in the following table.

The RSA Algorithm
1. Bob chooses secret primes p and q and computes n = pq.
2. Bob chooses e with gcd(e, (p − 1)(q − 1)) = 1.
3. Bob computes d with de ≡ 1 (mod (p − 1)(q − 1)).
4. Bob makes n and e public, and keeps p, q, d secret.
5. Alice encrypts m as c ≡ m^e (mod n) and sends c to Bob.
6. Bob decrypts by computing m ≡ c^d (mod n).
Bob chooses primes p and q. Then n = pq. Let the encryption exponent be e. The values of n and e are sent to Alice.
Alice’s message is cat. We will depart from our earlier practice of numbering the letters starting with a = 00; instead, we start the numbering at a = 01 and continue through z = 26. We do this because, in the previous method, if the letter a appeared at the beginning of a message, it would yield a message number starting with 00, so the 00 would disappear.
The message is therefore

m = 30120.
Alice computes
She sends to Bob.
Since Bob knows p and q, he knows (p − 1)(q − 1). He uses the extended Euclidean algorithm (see Section 3.2) to compute d such that

de ≡ 1 (mod (p − 1)(q − 1)).
The answer is
Bob computes

m ≡ c^d (mod n),

so he obtains the original message.
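The whole exchange can be replayed with small numbers. The primes below are hypothetical toy values chosen for illustration; real RSA moduli use primes of hundreds of digits.

```python
import math

# Toy RSA run mirroring the example (hypothetical small primes)
p, q = 104729, 104723            # Bob's secret primes
n = p * q
phi = (p - 1) * (q - 1)          # (p-1)(q-1)
e = 65537                        # encryption exponent
assert math.gcd(e, phi) == 1
d = pow(e, -1, phi)              # decryption exponent: de = 1 (mod phi)

m = 30120                        # "cat" with a = 01, ..., z = 26
c = pow(m, e, n)                 # Alice: c = m^e (mod n)
assert pow(c, d, n) == m         # Bob recovers m with c^d (mod n)
```

The three-argument pow does the modular exponentiation by successive squaring, and pow(e, -1, phi) (Python 3.8+) runs the extended Euclidean algorithm for us.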
For more examples, see Examples 24–30 in the Computer Appendices.
There are several aspects that need to be explained, but perhaps the most important is why m ≡ c^d (mod n). Recall Euler’s theorem (Section 3.6): If gcd(a, n) = 1, then a^φ(n) ≡ 1 (mod n). In our case, φ(n) = φ(pq) = (p − 1)(q − 1). Suppose gcd(m, n) = 1. This is very likely the case; since p and q are large, m probably has neither as a factor. Since de ≡ 1 (mod φ(n)), we can write de = 1 + kφ(n) for some integer k. Therefore,

c^d ≡ (m^e)^d = m^(1 + kφ(n)) = m · (m^φ(n))^k ≡ m · 1^k = m (mod n).
We have shown that Bob can recover the message. If gcd(m, n) ≠ 1, Bob still recovers the message. See Exercise 37.
What does Eve do? She intercepts c. She does not know d. We assume that Eve has no way of factoring n. The obvious way of computing d requires knowing (p − 1)(q − 1). We show later that this is equivalent to knowing p and q. Is there another way? We will show that if Eve can find d, then she can probably factor n. Therefore, it is unlikely that Eve finds the decryption exponent d.
Since Eve knows c, why doesn’t she simply take the eth root of c? This works well if we are not working mod n but is very difficult in our case. For example, if you know that m^3 ≡ 3 (mod 85), you cannot calculate the cube root of 3, namely 1.4422..., on your calculator and then reduce mod 85. Of course, a case-by-case search would eventually yield m = 7, but this method is not feasible for large n.
How does Bob choose p and q? They should be chosen at random, independently of each other. How large depends on the level of security needed, but it seems that they should have at least 300 digits. For reasons that we discuss later, it is perhaps best if they are of slightly different lengths. When we discuss primality testing, we’ll see that finding such primes can be done fairly quickly (see also Section 3.6). A few other tests should be done on p and q to make sure they are not bad. For example, if p − 1 has only small prime factors, then n is easy to factor by the p − 1 method (see Section 9.3), so p should be rejected and replaced with another prime.
Why does Bob require gcd(e, (p − 1)(q − 1)) = 1? Recall (see Section 3.3) that de ≡ 1 (mod (p − 1)(q − 1)) has a solution d if and only if gcd(e, (p − 1)(q − 1)) = 1. Therefore, this condition is needed in order for d to exist. The extended Euclidean algorithm can be used to compute d quickly. Since (p − 1)(q − 1) is even, e = 2 cannot be used; one might be tempted to use e = 3. However, there are dangers in using small values of e (see Section 9.2, Computer Problem 14, and Section 23.3), so something larger is usually recommended. Also, if e is a moderately large prime, then there is no difficulty ensuring that gcd(e, (p − 1)(q − 1)) = 1. It is now generally recommended that e = 65537 = 2^16 + 1. Among the data collected for [Lenstra2012 et al.] is the distribution of RSA encryption exponents that is given in Table 9.1.
In the encryption process, Alice calculates m^e (mod n). Recall that this can be done fairly quickly and without large memory, for example, by successive squaring. See Section 3.5. This is definitely an advantage of modular arithmetic: If Alice tried to calculate m^e first, then reduce mod n, it is possible that recording m^e would overflow her computer’s memory. Similarly, the decryption process of calculating c^d (mod n) can be done efficiently. Therefore, all the operations needed for encryption and decryption can be done quickly (i.e., in time a power of log n). The security is provided by the assumption that n cannot be factored.
We made two claims. We justify them here. Recall that the point of these two claims was that finding φ(n) = (p − 1)(q − 1) or finding the decryption exponent d is essentially as hard as factoring n. Therefore, if factoring is hard, then there should be no fast, clever way of finding d.
Claim 1: Suppose n = pq is the product of two distinct primes. If we know n and φ(n), then we can quickly find p and q.
Note that

n − φ(n) + 1 = pq − (p − 1)(q − 1) + 1 = p + q.

Therefore, we know pq and p + q. The roots of the polynomial

X^2 − (n − φ(n) + 1)X + n = X^2 − (p + q)X + pq = (X − p)(X − q)

are p and q, but they can also be calculated by the quadratic formula:

p, q = ((p + q) ± √((p + q)^2 − 4n)) / 2.

This yields p and q.
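The quadratic-formula recovery is a few lines of integer arithmetic. The function name and the example primes below are our own illustrative choices.

```python
import math

def factor_from_phi(n, phi):
    """Given n = p*q and phi = (p-1)(q-1), recover p and q as the roots
    of X^2 - (n - phi + 1)X + n, using the quadratic formula."""
    s = n - phi + 1                     # s = p + q
    t = math.isqrt(s * s - 4 * n)       # integer sqrt of the discriminant
    assert t * t == s * s - 4 * n       # must be a perfect square
    return (s + t) // 2, (s - t) // 2   # p = (s + t)/2, q = (s - t)/2

p, q = 104729, 104723                   # hypothetical example primes
assert factor_from_phi(p * q, (p - 1) * (q - 1)) == (p, q)
```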
For example, suppose and we know that . Consider the quadratic equation
The roots are
For another example, see Example 31 in the Computer Appendices.
Claim 2: If we know n and d, then we can probably factor n.
In the discussion of factorization methods in Section 9.4, we show that if we have an exponent r > 0 such that a^r ≡ 1 (mod n) for several a with gcd(a, n) = 1, then we can probably factor n. Since de − 1 is a multiple of φ(n), say de − 1 = kφ(n), we have

a^(de − 1) = (a^φ(n))^k ≡ 1^k = 1 (mod n)

whenever gcd(a, n) = 1. The factorization method can now be applied with r = de − 1.
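A hedged sketch of the resulting procedure (the same idea as the universal exponent method of Section 9.4): strip powers of 2 from de − 1 and look for a square root of 1 other than ±1, whose gcd with n splits n. The function name and example values are our own.

```python
import math, random

def factor_from_d(n, e, d):
    """Probably factor n given both exponents, using a^(e*d - 1) = 1 (mod n)
    for every a with gcd(a, n) = 1."""
    k = e * d - 1                 # a multiple of phi(n)
    s, m = 0, k
    while m % 2 == 0:
        s += 1
        m //= 2                   # k = 2^s * m with m odd
    while True:
        a = random.randrange(2, n - 1)
        g = math.gcd(a, n)
        if g > 1:
            return g              # extremely lucky: a shares a factor with n
        x = pow(a, m, n)
        if x in (1, n - 1):
            continue              # this a reveals nothing; try another
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1:            # x^2 = 1 with x not +-1: gcd splits n
                return math.gcd(x - 1, n)
            x = y
            if x == n - 1:
                break             # chain hit -1; try another a

p, q = 104729, 104723             # hypothetical example primes
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
f = factor_from_d(n, e, d)
assert f in (p, q) and n % f == 0
```

Each random base succeeds with probability at least about 1/2, so only a few iterations are expected.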
For an example, see Example 32 in the Computer Appendices.
Claim 2′: If p − q is small or medium-sized (for example, if p − q has several fewer digits than p) and we know n, then we can factor n.
We use the following procedure:
Compute s = ⌈√n⌉ (that is, round √n up to the nearest integer).
Compute t = √(s^2 − n).
Solve the quadratic equation X^2 − 2sX + n = 0.
The solutions are p = s + t and q = s − t.
For example, let n = 295927. We discover that ⌈√295927⌉ = 544. Compute

544^2 − 295927 = 295936 − 295927 = 9.

Therefore, s = 544 and t = √9 = 3. Then

p = s + t = 547 and q = s − t = 541.

The roots of the equation

X^2 − 1088X + 295927 = 0

are 547 and 541, and we can check that 547 · 541 = 295927.
Why does this work? We know that n = pq, so we write p = s + t and q = s − t, where s = (p + q)/2 and t = (p − q)/2. Since p ≥ q, we know that t ≥ 0, so s ≥ √n. We have

n = pq = (s + t)(s − t) = s^2 − t^2.

Usually, both p and q are approximately √n. In practice, p − q, and therefore t (which is half of p − q), is much smaller than √n. Therefore, t^2 is much smaller than n, which means that s = √(n + t^2) is only slightly larger than √n, so √n rounds up to s. This is why step 1 of the procedure finds s.
Once we have s, we use t^2 = s^2 − n to solve for t. As we have already seen, once we know pq = n and p + q = 2s, we can find p and q by solving the quadratic equation.
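The procedure translates directly into code. This is an illustrative sketch (our own function name); the loop increment is only needed when p − q is a bit larger, so the guessed s must grow past ⌈√n⌉.

```python
import math

def close_prime_factor(n):
    """Factor n = p*q when p - q is small: start s at the round-up of
    sqrt(n); s*s - n becomes a perfect square t^2 once s = (p+q)/2,
    and then p, q = s + t, s - t."""
    s = math.isqrt(n)
    if s * s < n:
        s += 1                    # s = ceil(sqrt(n))
    while True:
        t2 = s * s - n
        t = math.isqrt(t2)
        if t * t == t2:           # perfect square found
            return s + t, s - t
        s += 1

assert close_prime_factor(295927) == (547, 541)
```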
One way the RSA algorithm can be used is when there are several banks, for example, that want to be able to send financial data to each other. If there are several thousand banks, then it is impractical for each pair of banks to have a key for secret communication. A better way is the following. Each bank chooses integers and as before. These are then published in a public book. Suppose bank A wants to send data to bank B. Then A looks up B’s and and uses them to send the message. In practice, the RSA algorithm is not quite fast enough for sending massive amounts of data. Therefore, the RSA algorithm is often used to send a key for a faster encryption method such as AES.
PGP (= Pretty Good Privacy) used to be a standard method for encrypting email. When Alice sends an email message to Bob, she first signs the message using a digital signature algorithm such as those discussed in Chapter 13. She then encrypts the message using a block cipher such as triple DES or AES (other choices are IDEA or CAST-128) with a randomly chosen 128-bit key (a new random key is chosen for each transmission). She then encrypts this key using Bob’s public RSA key (other public key methods can also be used). When Bob receives the email, he uses his private RSA exponent to decrypt the random key. Then he uses this random key to decrypt the message, and he checks the signature to verify that the message is from Alice. For more discussion of PGP, see Section 15.6.
In practice, the RSA algorithm has proven to be effective, as long as it is implemented correctly. We give a few possible implementation mistakes in the Exercises. Here are a few other potential difficulties. For more about attacks on RSA, see [Boneh].
Let n = pq have m digits. If we know the first m/4, or the last m/4, digits of p, we can efficiently factor n.
In other words, if p and q have 300 digits each, so that n has around 600 digits, and we know the first 150 digits, or the last 150 digits, of p, then we can factor n. Therefore, if we choose a random starting point in order to choose our prime p, the method should be such that a large amount of p is not predictable. For example, suppose we take a random 150-digit number x and test numbers of the form x · 10^150 + k, k = 1, 2, 3, ..., for primality until we find a prime p (which should happen for some k of at most a few thousand). An attacker who knows that this method is used will know 147 of the last 150 digits (they will all be 0 except for the last three or four digits). Trying the method of the theorem for the various values of k will eventually lead to the factorization of n.
For details of the preceding result, see [Coppersmith2]. A related result is the following.
Suppose (n, e) is an RSA public key and n has m digits. Let d be the decryption exponent. If we have at least the last m/4 digits of d, we can efficiently find d in time that is linear in e log e.
This means that the time to find d is bounded as a function linear in e log e. If e is small, it is therefore quite fast to find d when we know a large part of d. If e is large, perhaps around 10^150, the theorem is no better than a case-by-case search for d. For details, see [Boneh et al.].
Low encryption or decryption exponents are tempting because they speed up encryption or decryption. However, there are certain dangers that must be avoided. One pitfall of using e = 3 is given in Computer Problem 14. Another difficulty is discussed in Chapter 23 (Lattice Methods). These problems can be avoided by using a somewhat higher exponent. One popular choice is e = 65537 = 2^16 + 1. This is prime, so it is likely that it is relatively prime to (p − 1)(q − 1). Since it is one more than a power of 2, exponentiation to this power can be done quickly: To calculate m^65537, square m sixteen times, then multiply the result by m.
The decryption exponent d should of course be chosen large enough that brute force will not find it. However, even more care is needed, as the following result shows. One way to obtain desired properties of d is to choose d first, then find e with de ≡ 1 (mod (p − 1)(q − 1)).
Suppose Bob wants to be able to decrypt messages quickly, so he chooses a small value of d. The following theorem of M. Wiener [Wiener] shows that often Eve can then find d easily. In practice, if the inequalities in the hypotheses of the proposition are weakened, then Eve can still use the method to obtain d in many cases. Therefore, it is recommended that d be chosen fairly large.
Suppose p, q are primes with q < p < 2q. Let n = pq and let d, e satisfy de ≡ 1 (mod φ(n)) with 1 ≤ d, e < φ(n). If d < (1/3)n^(1/4), then d can be calculated quickly (that is, in time polynomial in log n).
Proof. Since q < p < 2q, we have q^2 < pq = n, so q < √n. Therefore, since p < 2q,

n − φ(n) = p + q − 1 < 2q + q − 1 < 3q < 3√n.

Write ed = 1 + kφ(n) for some integer k. Since e < φ(n), we have kφ(n) = ed − 1 < dφ(n), so k < d < (1/3)n^(1/4). Therefore,

|e/n − k/d| = |ed − kn| / (dn) = |1 − k(n − φ(n))| / (dn) ≤ 3k√n / (dn) < n^(3/4) / (dn) = 1 / (d n^(1/4)) < 1 / (3d^2),

since 3d < n^(1/4) by assumption.
We now need a result about continued fractions. Recall from Section 3.12 that if x is a positive real number and k and d are positive integers with

|x − k/d| < 1/(2d^2),

then k/d arises from the continued fraction expansion of x. Therefore, in our case, k/d arises from the continued fraction expansion of e/n. Therefore, Eve does the following:
Computes the continued fraction of e/n. After each step, she obtains a fraction k/d.
Eve uses k and d to compute (ed − 1)/k. (Since ed − 1 = kφ(n) when k/d is the correct fraction, this value is a candidate for φ(n).)
If (ed − 1)/k is not an integer, she proceeds to the next step of the continued fraction.
If (ed − 1)/k is an integer, then she finds the roots of X^2 − (n − (ed − 1)/k + 1)X + n. (Note that this is possibly the equation from Claim 1 earlier.) If the roots p and q are integers, then Eve has factored n. If not, then Eve proceeds to the next step of the continued fraction algorithm.
Since the number of steps in the continued fraction expansion of e/n is at most a constant times log n, and since the continued fraction algorithm stops when the fraction e/n is reached, the algorithm terminates quickly. Therefore, Eve finds the factorization of n quickly.
Recall that the rational approximations to a number x arising from the continued fraction algorithm are alternately larger than and smaller than x. Since ed − kn = 1 − k(n − φ(n)) < 0, we have e/n < k/d, so we only need to consider every second fraction arising from the continued fraction.
What happens if Eve reaches e/n without finding the factorization of n? This means that the hypotheses of the proposition are not satisfied. However, it is possible that sometimes the method will yield the factorization of n even when the hypotheses fail.
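Eve's procedure can be sketched as follows. This is an illustrative implementation under our own naming; it walks the convergents k/d of the continued fraction of e/n and tests the candidate φ(n) at each step.

```python
import math

def wiener_attack(e, n):
    """Try to recover a small decryption exponent d from (e, n) by
    testing the convergents of the continued fraction of e/n."""
    a, b = e, n
    quotients = []
    while b:
        quotients.append(a // b)      # continued fraction of e/n
        a, b = b, a % b
    h1, h2 = 1, 0                     # numerator recurrence seeds
    g1, g2 = 0, 1                     # denominator recurrence seeds
    for q in quotients:
        h1, h2 = q * h1 + h2, h1
        g1, g2 = q * g1 + g2, g1
        k, d = h1, g1                 # current convergent k/d
        if k == 0 or (e * d - 1) % k != 0:
            continue
        phi = (e * d - 1) // k        # candidate for phi(n)
        s = n - phi + 1               # candidate for p + q
        disc = s * s - 4 * n
        if disc >= 0 and math.isqrt(disc) ** 2 == disc:
            return d                  # p, q = (s +- sqrt(disc))/2 are integers
    return None

p, q = 104729, 104723                 # hypothetical primes with q < p < 2q
n = p * q
d = 101                               # small exponent, d < (1/3) n^(1/4)
e = pow(d, -1, (p - 1) * (q - 1))
assert wiener_attack(e, n) == 101
```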
As an example, suppose Eve knows n and e, where the decryption exponent d is small enough for the proposition to apply. She computes the continued fraction expansion of e/n.
The first fraction yields a candidate for d that is even. Since φ(n) is even and de ≡ 1 (mod φ(n)), the exponent d must be odd, so we discard this possibility.
By the remark, we may jump to the third fraction.
Again, we discard this candidate since d must be odd.
The fifth fraction gives a value of (ed − 1)/k that is not an integer.
The seventh fraction gives an integer candidate for φ(n). The roots of

X^2 − (n − φ(n) + 1)X + n = 0

are integers p and q, to several decimal places of accuracy. Since pq = n,
we have factored n.
A common use of RSA is to transmit keys for use in DES, AES, or other symmetric cryptosystems. However, a naive implementation could lead to a loss of security. Suppose a 56-bit DES key is written as a number m < 2^56. This is encrypted with RSA to obtain c ≡ m^e (mod n). Although m is small, the ciphertext c is probably a number of the same size as n, so perhaps around 200 digits. However, Eve attacks the system as follows. She makes two lists:

c x^(−e) (mod n) for all x with 1 ≤ x ≤ 2^28.
y^e (mod n) for all y with 1 ≤ y ≤ 2^28.

She looks for a match between an element on the first list and an element on the second list. If she finds one, then she has c x^(−e) ≡ y^e for some x, y. This yields

c ≡ (xy)^e (mod n),

so m ≡ xy (mod n). Is this attack likely to succeed? Suppose m is the product of two integers x and y, both less than 2^28. Then these will yield a match for Eve. Not every m will have this property, but many values of m are the product of two integers less than 2^28. For these, Eve will obtain m = xy.
This attack is much more efficient than trying all 2^56 possibilities for m, which is nearly impossible on one computer, and would take a long time even with several computers working in parallel. In the present attack, Eve needs to compute and store a list of length 2^28, then compute the elements on the other list and check each one against the first list. Therefore, Eve performs approximately 2^29 computations (and compares with the list up to 2^28 times). This is easily possible on a single computer. For more on this attack, see [Boneh-Joux-Nguyen].
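The meet-in-the-middle idea above can be sketched with a tiny bound (real parameters would use 2^28; the function name and toy numbers are our own).

```python
def meet_in_the_middle(c, e, n, bound):
    """Recover m from c = m^e (mod n) when m = x*y with x, y <= bound."""
    table = {pow(y, e, n): y for y in range(1, bound + 1)}   # second list
    for x in range(1, bound + 1):                            # first list
        # c * x^(-e) mod n; the inverse exists since gcd(x, n) = 1 here
        cand = c * pow(pow(x, e, n), -1, n) % n
        if cand in table:
            return x * table[cand]                           # m = x * y
    return None

p, q = 104729, 104723          # hypothetical primes
n, e = p * q, 65537
m = 1234 * 567                 # a small secret that factors nicely
c = pow(m, e, n)
assert meet_in_the_middle(c, e, n, 2000) == m
```

Any match with xy < n must equal m exactly, since exponentiation by e is a bijection mod n when gcd(e, φ(n)) = 1.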
It is easy to prevent this attack. Instead of using a small value of m, adjoin some random digits to the beginning and end of m so as to form a much longer plaintext. When Bob decrypts the ciphertext, he simply removes these random digits and obtains m.
A more sophisticated method of preprocessing the plaintext, namely Optimal Asymmetric Encryption Padding (OAEP), was introduced by Bellare and Rogaway [Bellare-Rogaway2] in 1994. Suppose Alice wants to send a message m to Bob, whose RSA public key is (n, e), where n has k bits. Two positive integers k0 and k1 are specified in advance, with k0 + k1 < k. Alice’s message is allowed to have k − k0 − k1 bits. Typical values are k = 1024, k0 = k1 = 128. Let G be a function that inputs strings of k0 bits and outputs strings of k − k0 bits. Let H be a function that inputs k − k0 bits and outputs k0 bits. The functions G and H are usually constructed from hash functions (see Chapter 11 for a discussion of hash functions). To encrypt m, Alice first expands it to length k − k0 by adjoining k1 zero bits. The result is denoted m0^k1. She then chooses a random string r of k0 bits and computes

x1 = (m0^k1) ⊕ G(r)   and   x2 = r ⊕ H(x1).

If the concatenation x1 x2 is a binary number larger than n, Alice chooses a new random number r and computes new values for x1 and x2. As soon as she obtains x1 x2 < n (this has a probability of at least 1/2 of happening for each r, as long as G produces fairly random outputs), she forms the ciphertext

c ≡ (x1 x2)^e (mod n).

To decrypt a ciphertext c, Bob uses his private RSA decryption exponent d to compute c^d (mod n). The result is written in the form

x1 x2,

where x1 has k − k0 bits and x2 has k0 bits. Bob then computes

r = x2 ⊕ H(x1)   and then   m0^k1 = x1 ⊕ G(r).

The correctness of this decryption can be justified as follows. If the ciphertext is the encryption of m, then

x2 ⊕ H(x1) = (r ⊕ H(x1)) ⊕ H(x1) = r,

and

x1 ⊕ G(r) = ((m0^k1) ⊕ G(r)) ⊕ G(r) = m0^k1.

Bob removes the k1 zero bits from the end and obtains m. Also, Bob has a check on the integrity of the ciphertext. If there are not k1 zeros at the end, then the ciphertext does not correspond to a valid encryption.
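The padding and unpadding steps can be sketched as follows, with made-up toy sizes (k = 256, k0 = k1 = 64 bits) and with G and H built from SHA-256 as illustrative choices, not a standard profile; the RSA exponentiation of x1 x2 is omitted.

```python
import hashlib, os

K, K0, K1 = 256, 64, 64
MLEN = (K - K0 - K1) // 8            # message length in bytes (16)

def G(r):                            # expands k0 bits to k - k0 bits
    out, i = b"", 0
    while len(out) < (K - K0) // 8:
        out += hashlib.sha256(r + bytes([i])).digest()
        i += 1
    return out[:(K - K0) // 8]

def H(x):                            # compresses k - k0 bits to k0 bits
    return hashlib.sha256(x).digest()[:K0 // 8]

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def pad(m):                          # m has k - k0 - k1 bits
    r = os.urandom(K0 // 8)          # random k0-bit string
    x1 = xor(m + b"\x00" * (K1 // 8), G(r))
    x2 = xor(r, H(x1))
    return x1 + x2                   # x1 x2 would then be RSA-encrypted

def unpad(x):
    x1, x2 = x[:(K - K0) // 8], x[(K - K0) // 8:]
    r = xor(x2, H(x1))               # recover the random string
    m0 = xor(x1, G(r))               # recover m followed by k1 zeros
    assert m0.endswith(b"\x00" * (K1 // 8)), "invalid ciphertext"
    return m0[:MLEN]

msg = b"sixteen-byte msg"
assert unpad(pad(msg)) == msg
```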
This method is sometimes called plaintext-aware encryption. Note that the padding with G(r) depends on the message and on the random parameter r. This makes chosen ciphertext attacks on the system more difficult. It is also used for ciphertext indistinguishability. See Section 4.5.
For discussion of the security of OAEP, see [Shoup].
Another type of attack on RSA and similar systems was discovered by Paul Kocher in 1995, while he was an undergraduate at Stanford. He showed that it is possible to discover the decryption exponent by carefully timing the computation times for a series of decryptions. Though there are ways to thwart the attack, this development was unsettling. There had been a general feeling of security since the mathematics was well understood. Kocher’s attack demonstrated that a system could still have unexpected weaknesses.
Here is how the timing attack works. Suppose Eve is able to observe Bob decrypt several ciphertexts c1, ..., cn. She times how long this takes for each ci. Knowing each ci and the time required for it to be decrypted will allow her to find the decryption exponent d. But first, how could Eve obtain such information? There are several situations where encrypted messages are sent to Bob and his computer automatically decrypts and responds. Measuring the response times suffices for the present purposes.
We need to assume that we know the hardware being used to calculate c^d (mod n). We can use this information to calculate the computation times for various steps that potentially occur in the process.
Let’s assume that c^d (mod n) is computed by an algorithm given in Exercise 56 in Chapter 3, which is as follows:
Let d be written in binary as b1 b2 ... bw (for example, when d = 11, we have b1 b2 b3 b4 = 1011). Let c and n be integers. Perform the following procedure:
1. Start with k = 1 and s1 = 1.
2. If bk = 1, let rk ≡ sk · c (mod n). If bk = 0, let rk = sk.
3. Let s(k+1) ≡ rk^2 (mod n).
4. If k = w, stop. If k < w, add 1 to k and go to (2).
Then rw ≡ c^d (mod n).
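The procedure above translates line for line into code (a sketch; the function name is our own):

```python
def powmod(c, d, n):
    """The bit-by-bit procedure above: scan the binary digits of d from
    the most significant end; multiply by c exactly when the bit is 1."""
    s = 1                                     # step 1: s_1 = 1
    for b in bin(d)[2:]:                      # bits b_1 b_2 ... b_w
        r = (s * c) % n if b == "1" else s    # step 2
        s = (r * r) % n                       # step 3: s_{k+1} = r_k^2
    return r                                  # r_w = c^d (mod n)

assert powmod(2, 34, 35) == pow(2, 34, 35)
assert powmod(7, 560, 561) == pow(7, 560, 561)
```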
Note that the multiplication sk · c occurs only when the bit bk = 1. In many situations, there is a reasonably large variation in how long this multiplication takes. We assume this is the case here.
Before we continue, we need a few facts from probability. Suppose we have a random process that produces real numbers t as outputs. For us, t will be the time it takes for the computer to complete a calculation, given a random input c. The mean μ is the average value of these outputs. If we record outputs t1, ..., tn, the mean should be approximately (t1 + ... + tn)/n. The variance for the random process is approximated by

Var({ti}) = Σ (ti − μ)^2 / n.

The standard deviation is the square root of the variance and gives a measure of how much variation there is in the values of the ti’s.
The important fact we need is that when two random processes are independent, the variance for the sum of their outputs is the sum of the variances of the two processes. For example, we will break the computation done by the computer into two independent processes, which will take times t and t′. The total time will be t + t′. Therefore, Var({t + t′}) should be approximately Var({t}) + Var({t′}).
Now assume Eve knows ciphertexts c1, ..., cn and the times Ti that it took to compute each ci^d (mod n). Suppose she knows the first k − 1 bits b1, ..., b(k−1) of the exponent d. Since she knows the hardware being used, she knows how much time was used in calculating r1, ..., r(k−1) in the preceding algorithm. Therefore, she knows, for each ci, the time ti′ that remains after these steps are completed.
Eve wants to determine bk. If bk = 1, a multiplication sk · ci will take place for each ciphertext ci that is processed. If bk = 0, there is no such multiplication.
Let ti be the amount of time it takes the computer to perform the multiplication sk · ci (mod n), though Eve does not yet know whether this multiplication actually occurs. Eve computes Var({ti′}) and Var({ti′ − ti}). If Var({ti′ − ti}) < Var({ti′}), then Eve concludes that bk = 1. If not, bk = 0. After determining bk, she proceeds in the same manner to find all the bits.
Why does this work? If the multiplication occurs, ti′ − ti is the amount of time it takes the computer to complete the calculation after the multiplication. It is reasonable to assume ti and ti′ − ti are outputs that are independent of each other. Therefore,

Var({ti′}) = Var({ti}) + Var({ti′ − ti}), so Var({ti′ − ti}) = Var({ti′}) − Var({ti}) < Var({ti′}).

If the multiplication does not occur, ti is the time for an operation unrelated to the computation, so it is reasonable to assume ti and ti′ are independent. Therefore,

Var({ti′ − ti}) = Var({ti′}) + Var({ti}) > Var({ti′}).

Note that we couldn’t use the mean in place of the variance, since the mean of ti′ − ti equals the mean of ti′ minus the mean of ti whether or not the multiplication occurs, so the last inequality would not hold. All that can be deduced from the mean is the total number of nonzero bits in the binary expansion of d.
The preceding gives a fairly simple version of the method. In practice, various modifications would be needed, depending on the specific situation. But the general strategy remains the same. For more details, see [Kocher]. For more on timing attacks, see [Crosby et al.].
A similar attack on RSA works by measuring the power consumed during the computations. See [Kocher et al.]. Another method, called acoustic cryptanalysis, obtains information from the high-pitched noises emitted by the electronic components of a computer during its computations. See [Genkin et al.]. Attacks such as these and the timing attack can be prevented by appropriate design features in the physical implementation.
Timing attacks, power analysis, and acoustic cryptanalysis are examples of side-channel attacks, where the attack is on the implementation rather than on the basic cryptographic algorithm.
Suppose we have an integer of 300 digits that we want to test for primality. We know by Exercise 7 in Chapter 3 that one way is to divide by all the primes up to its square root. What happens if we try this? There are around 10^147 primes less than 10^150. This is significantly more than the number of particles in the universe. Moreover, if the computer can handle 10^9 primes per second, the calculation would take around 10^131 years. (It’s been suggested that you could go sit on the beach for 20 years, then buy a computer that is 1000 times faster, which would cut the runtime down to 10^128 years – a very large savings!) Clearly, better methods are needed. Some of these are discussed in this section.
A very basic idea, one that is behind many factorization methods, is the following.
Let n be an integer and suppose there exist integers x and y with x^2 ≡ y^2 (mod n), but x ≢ ±y (mod n). Then n is composite. Moreover, gcd(x − y, n) gives a nontrivial factor of n.
Proof. Let d = gcd(x − y, n). If d = n, then x ≡ y (mod n), which is assumed not to happen. Suppose d = 1. The Proposition in Subsection 3.3.1 says that if a | bc and gcd(a, b) = 1, then a | c. In our case, let a = n, let b = x − y, and let c = x + y. Then n | (x − y)(x + y), since x^2 ≡ y^2 (mod n). If d = gcd(n, x − y) = 1, then n | x + y. This says that x ≡ −y (mod n), which contradicts the assumption that x ≢ −y (mod n). Therefore, d ≠ 1, n, so d is a nontrivial factor of n.
Since 12^2 ≡ 2^2 (mod 35), but 12 ≢ ±2 (mod 35), we know that 35 is composite. Moreover, gcd(12 − 2, 35) = 5 is a nontrivial factor of 35.
It might be surprising, but factorization and primality testing are not the same. It is much easier to prove a number is composite than it is to factor it. There are many large integers that are known to be composite but that have not been factored. How can this be done? We give a simple example. We know by Fermat’s theorem that if p is prime and p ∤ a, then a^(p−1) ≡ 1 (mod p). Let’s use this to show 35 is not prime. By successive squaring, we find (congruences are mod 35)

2^1 ≡ 2, 2^2 ≡ 4, 2^4 ≡ 16, 2^8 ≡ 256 ≡ 11, 2^16 ≡ 11^2 ≡ 16, 2^32 ≡ 16^2 ≡ 11.

Therefore,

2^34 ≡ 2^32 · 2^2 ≡ 11 · 4 ≡ 9 ≢ 1 (mod 35).
Fermat’s theorem says that 35 cannot be prime, so we have proved 35 to be composite without finding a factor.
The same reasoning gives us the following.
Let n > 1 be an integer. Choose a random integer a with 1 < a < n − 1. If a^(n−1) ≢ 1 (mod n), then n is composite. If a^(n−1) ≡ 1 (mod n), then n is probably prime.
Although this and similar tests are usually called “primality tests,” they are actually “compositeness tests,” since they give a completely certain answer only in the case when n is composite. The Fermat test is quite accurate for large n. If it declares a number to be composite, then this is guaranteed to be true. If it declares a number to be probably prime, then empirical results show that this is very likely true. Moreover, since modular exponentiation is fast, the Fermat test can be carried out quickly.
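The Fermat test can be sketched in a few lines; the function name and the default base a = 2 are our illustrative choices.

```python
# A minimal sketch of the Fermat compositeness test: compute a^(n-1) mod n
# and compare with 1.  "probably prime" is not a proof of primality.
def fermat_test(n, a=2):
    if pow(a, n - 1, n) != 1:
        return "composite"
    return "probably prime"
```

Note that the pseudoprime 561 slips through this test for the base 2, which is exactly the phenomenon discussed below.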
Recall that modular exponentiation is accomplished by successive squaring. If we are careful about how we do this successive squaring, the Fermat test can be combined with the Basic Principle to yield the following stronger result.
Let n > 1 be an odd integer. Write n − 1 = 2^k m with m odd. Choose a random integer a with 1 < a < n − 1. Compute b_0 ≡ a^m (mod n). If b_0 ≡ ±1 (mod n), then stop and declare that n is probably prime. Otherwise, let b_1 ≡ b_0^2 (mod n). If b_1 ≡ 1 (mod n), then n is composite (and gcd(b_0 − 1, n) gives a nontrivial factor of n). If b_1 ≡ −1 (mod n), then stop and declare that n is probably prime. Otherwise, let b_2 ≡ b_1^2 (mod n). If b_2 ≡ 1 (mod n), then n is composite. If b_2 ≡ −1 (mod n), then stop and declare that n is probably prime. Continue in this way until stopping or reaching b_{k−1}. If b_{k−1} ≢ −1 (mod n), then n is composite.
Let n = 561. Then n − 1 = 560 = 2^4 · 35, so k = 4 and m = 35. Let a = 2. Then

b_0 ≡ 2^35 ≡ 263 (mod 561),
b_1 ≡ b_0^2 ≡ 166 (mod 561),
b_2 ≡ b_1^2 ≡ 67 (mod 561),
b_3 ≡ b_2^2 ≡ 1 (mod 561).
Since b_3 ≡ 1 (mod 561) but b_2 ≢ ±1 (mod 561), we conclude that 561 is composite. Moreover, gcd(b_2 − 1, 561) = gcd(66, 561) = 33, which is a nontrivial factor of 561.
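The procedure above can be sketched as follows; the function returns a nontrivial factor whenever the squarings hit 1 from a value that is not ±1, as happens for 561 with base 2.

```python
from math import gcd

# A sketch of the Miller-Rabin procedure described in the text.  Returns
# ("composite", factor) when b_u^2 ≡ 1 with b_u !≡ ±1 (then gcd(b_u - 1, n)
# is a nontrivial factor), ("composite", None) when the last term is not -1,
# and ("probably prime", None) otherwise.
def miller_rabin(n, a):
    k, m = 0, n - 1
    while m % 2 == 0:              # write n - 1 = 2^k * m with m odd
        k, m = k + 1, m // 2
    b = pow(a, m, n)               # b_0
    if b == 1 or b == n - 1:
        return ("probably prime", None)
    for _ in range(k - 1):
        b2 = pow(b, 2, n)
        if b2 == 1:                # square root of 1 other than ±1 found
            return ("composite", gcd(b - 1, n))
        if b2 == n - 1:
            return ("probably prime", None)
        b = b2
    return ("composite", None)     # b_{k-1} !≡ -1
```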
If n is composite and a^(n−1) ≡ 1 (mod n), then we say that n is a pseudoprime for the base a. If a composite n and a base a are such that n passes the Miller-Rabin test, we say that n is a strong pseudoprime for the base a. We showed in Section 3.6 that 2^560 ≡ 1 (mod 561), so 561 is a pseudoprime for the base 2. However, the preceding calculation shows that 561 is not a strong pseudoprime for the base 2. For a given base, strong pseudoprimes are much rarer than pseudoprimes.
Up to 10^10, there are 455052511 primes. There are 14884 pseudoprimes for the base 2, and 3291 strong pseudoprimes for the base 2. Therefore, calculating 2^(n−1) (mod n) will fail to recognize a composite in this range with probability less than 1 out of 30 thousand, and using the Miller-Rabin test with a = 2 will fail with probability less than 1 out of 100 thousand.
It can be shown that the probability that the Miller-Rabin test fails to recognize a composite for a randomly chosen a is at most 1/4. In fact, it fails much less frequently than this. See [Damgård et al.]. If we repeat the test 10 times, say, with randomly chosen values of a, then we expect that the probability of certifying a composite number as prime is at most (1/4)^10 ≈ 10^−6. In practice, using the test for a single a is fairly accurate.
Though strong pseudoprimes are rare, it has been proved (see [Alford et al.]) that, for any finite set of bases, there are infinitely many integers that are strong pseudoprimes for all the bases in the set. The first strong pseudoprime for all the bases 2, 3, 5, 7 is 3215031751. There is a 337-digit number that is a strong pseudoprime for all bases that are primes less than 200.
Suppose we need to find a prime of around 300 digits. The prime number theorem asserts that the density of primes around x is approximately 1/ln x. When x = 10^300, this gives a density of around 1/691. Since we can skip the even numbers, this can be raised to around 1/345. Pick a random starting point, and throw out the even numbers (and multiples of other small primes). Test each remaining number in succession by the Miller-Rabin test. This will tend to eliminate all the composites. On average, it will take fewer than 400 uses of the Miller-Rabin test to find a likely candidate for a prime, so this can be done fairly quickly. If we need to be completely certain that the number in question is prime, there are more sophisticated primality tests that can test a number of 300 digits in a few seconds.
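The search just described can be sketched as follows (a minimal illustration; the function names, the number of Miller-Rabin rounds, and the small-prime prefilter are our choices).

```python
import random

# Probable-prime check: trial division by a few small primes, then several
# rounds of the Miller-Rabin test with random bases.
def is_probable_prime(n, rounds=10):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    k, m = 0, n - 1
    while m % 2 == 0:                  # n - 1 = 2^k * m, m odd
        k, m = k + 1, m // 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        b = pow(a, m, n)
        if b in (1, n - 1):
            continue
        for _ in range(k - 1):
            b = pow(b, 2, n)
            if b == n - 1:
                break
        else:
            return False               # this base proves n composite
    return True

# Pick a random odd starting point of the desired size and walk upward
# through the odd numbers until a probable prime is found.
def random_prime(digits):
    n = random.randrange(10 ** (digits - 1), 10 ** digits) | 1
    while not is_probable_prime(n):
        n += 2
    return n
```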
Why does the test work? Suppose, for example, that b_3 ≡ 1 (mod n). This means that b_2^2 ≡ 1^2 (mod n). Apply the Basic Principle from before. Either b_2 ≡ ±1 (mod n), or b_2 ≢ ±1 (mod n) and n is composite. In the latter case, gcd(b_2 − 1, n) gives a nontrivial factor of n. In the former case, the algorithm would have stopped by the previous step. If we reach b_{k−1}, we have computed b_{k−1} ≡ a^(2^(k−1) m) (mod n). The square of this is a^(2^k m) ≡ a^(n−1) (mod n), which must be 1 if n is prime, by Fermat’s theorem. Therefore, if n is prime, b_{k−1} ≡ ±1 (mod n). All other choices of b_{k−1} mean that n is composite. Moreover, if b_{k−1} ≡ 1 (mod n), then, if we didn’t stop at an earlier step, b_{k−2}^2 ≡ 1 (mod n) with b_{k−2} ≢ ±1 (mod n). This means that n is composite (and we can factor n).
In practice, if n is composite, usually we reach b_{k−1} and it is not ±1. In fact, usually a^(n−1) ≢ 1 (mod n). This means that Fermat’s test says n is not prime.
For example, let n = 299 and a = 2. Since 2^298 ≡ 140 ≢ 1 (mod 299), Fermat’s theorem and also the Miller-Rabin test say that 299 is not prime (without factoring it). The reason this happens is the following. Note that 299 = 13 · 23. An easy calculation shows that 2^12 ≡ 1 (mod 13) and no smaller positive exponent works. In fact, 2^j ≡ 1 (mod 13) if and only if j is a multiple of 12. Since 298 is not a multiple of 12, we have 2^298 ≢ 1 (mod 13), and therefore also 2^298 ≢ 1 (mod 299). Similarly, 2^j ≡ 1 (mod 23) if and only if j is a multiple of 11, from which we can again deduce that 2^298 ≢ 1 (mod 299). If Fermat’s theorem (and the Miller-Rabin test) were to give us the wrong answer in this case, we would have needed 298 to be a multiple of both 12 and 11.
Consider the general case n = pq, a product of two distinct primes. For simplicity, consider the case where a = 2 and suppose 2^j ≡ 1 (mod p) if and only if j is a multiple of p − 1. This means that 2 is a primitive root mod p; there are φ(p − 1) such primitive roots mod p. Since n − 1 = pq − 1 = (p − 1)q + (q − 1), we have

n − 1 ≡ q − 1 (mod p − 1).
Therefore, if q < p, then n − 1 is not a multiple of p − 1, so 2^(n−1) ≢ 1 (mod p) by our choice of p, which implies that 2^(n−1) ≢ 1 (mod n). Similar reasoning shows that usually a^(n−1) ≢ 1 (mod n) for many other choices of a, too.
But suppose we are in a case where a^(n−1) ≡ 1 (mod n). What happens? Let’s look at the example of n = 561 = 3 · 11 · 17 with a = 2. Since 2^560 ≡ 1 (mod 561), we consider what is happening to the sequence 2^35, 2^70, 2^140, 2^280, 2^560 mod 3, mod 11, and mod 17:

2^35 ≡ −1, 2^70 ≡ 1, 2^140 ≡ 1, 2^280 ≡ 1, 2^560 ≡ 1 (mod 3)
2^35 ≡ −1, 2^70 ≡ 1, 2^140 ≡ 1, 2^280 ≡ 1, 2^560 ≡ 1 (mod 11)
2^35 ≡ 8, 2^70 ≡ 13, 2^140 ≡ −1, 2^280 ≡ 1, 2^560 ≡ 1 (mod 17)
Since 2^280 ≡ 1 (mod 561), we have 2^280 ≡ 1 mod all three primes. But there is no reason that 2^280 is the first time we get 1 mod a particular prime. We already have 2^70 ≡ 1 mod 3 and mod 11, but we have to wait for 2^280 when working mod 17. Therefore, 2^140 ≡ 1 mod 3, 2^140 ≡ 1 mod 11, and 2^140 ≡ −1 mod 17, so 2^140 is congruent to 1 only mod 3 and mod 11. Therefore, gcd(2^140 − 1, 561) contains the factors 3 and 11, but not 17. This is why gcd(2^140 − 1, 561) finds the factor 3 · 11 = 33 of 561. The reason we could factor 561 by this method is that the sequence 2^35, 2^70, 2^140, 2^280 reached 1 mod the primes not all at the same time.
More generally, consider the case n = pq (a product of several primes is similar) and suppose a^(n−1) ≡ 1 (mod n). As pointed out previously, it is very unlikely that this is the case; but if it does happen, look at what is happening mod p and mod q. It is likely that the sequences b_0, b_1, b_2, ... mod p and mod q reach −1 and then 1 at different times, just as in the example of 561. In this case, we will have b_u ≡ 1 (mod p) but b_u ≢ 1 (mod q) for some u; therefore, p divides b_u − 1 but n does not. Therefore, we’ll be able to factor n by computing gcd(b_u − 1, n).
The only way that n can pass the Miller-Rabin test is to have a^(n−1) ≡ 1 (mod n) and also to have the sequences b_0, b_1, b_2, ... mod p and mod q reach 1 at the same time. This rarely happens.
Another primality test of a nature similar to the Miller-Rabin test is the Solovay-Strassen test, which uses the Jacobi symbol (see Section 3.10).
Let n be an odd integer. Choose several random integers a with 1 < a < n − 1. If

(a/n) ≢ a^((n−1)/2) (mod n)

for some a, then n is composite. If

(a/n) ≡ a^((n−1)/2) (mod n)

for all of these random a, then n is probably prime.
Note that if n is prime, then the test will declare n to be a probable prime. This is because of Part 2 of the second Proposition in Section 3.10.
The Jacobi symbol (a/n) can be evaluated quickly, as in Section 3.10. The modular exponentiation a^((n−1)/2) (mod n) can also be performed quickly.
For example,

2^((15−1)/2) = 2^7 ≡ 8 (mod 15), while (2/15) = +1,

so 15 is not prime. As in the Miller-Rabin test, we usually do not get a^((n−1)/2) ≡ ±1 (mod n) for composite n. Here is a case where it happens:

2^((341−1)/2) = 2^170 ≡ 1 (mod 341), but (2/341) = −1.
However, the Solovay-Strassen test says that 341 is composite.
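The test can be sketched as follows; jacobi computes the Jacobi symbol by the standard reduction rules (pulling out factors of 2 and applying quadratic reciprocity), and the function names are our illustrative choices.

```python
# Jacobi symbol (a/n) for odd n > 0, computed without factoring n.
def jacobi(a, n):
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # (2/n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        if a % 4 == 3 and n % 4 == 3:  # quadratic reciprocity (both odd)
            result = -result
        a, n = n % a, a
    return result if n == 1 else 0     # 0 when gcd(a, n) > 1

# One round of the Solovay-Strassen test with base a.
def solovay_strassen(n, a):
    j = jacobi(a, n)
    if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
        return "composite"
    return "probably prime"
```

The pseudoprime 341 is caught here because 2^170 ≡ 1 (mod 341) while (2/341) = −1, exactly as in the worked example.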
Both the Miller-Rabin and the Solovay-Strassen tests work quickly in practice, but, when n is prime, they do not give rigorous proofs that n is prime. There are tests that actually prove the primality of n, but they are somewhat slower and are used only when it is essential that the number be proved to be prime. Most of these methods are probabilistic, in the sense that they work with very high probability in any given case, but success is not guaranteed. In 2002, Agrawal, Kayal, and Saxena [Agrawal et al.] gave what is known as a deterministic polynomial time algorithm for deciding whether or not a number is prime. This means that the computation time is always, rather than probably, bounded by a constant times a power of log n. This was a great theoretical advance, but their algorithm has not yet been improved to the point that it competes with the probabilistic algorithms.
For more on primality testing and its history, see [Williams].
We now turn to factoring. The basic method of dividing an integer by all primes is much too slow for most purposes. For many years, people have worked on developing more efficient algorithms. We present some of them here. In Chapter 21, we’ll also cover a method using elliptic curves, and in Chapter 25, we’ll show how a quantum computer, if built, could factor efficiently.
One method, which is also too slow, is usually called the Fermat factorization method. The idea is to express n as a difference of two squares: n = x^2 − y^2. Then n = (x + y)(x − y) gives a factorization of n. For example, suppose we want to factor n = 295927. Compute n + 1^2, n + 2^2, n + 3^2, ..., until we find a square. In this case, 295927 + 3^2 = 295936 = 544^2. Therefore,

295927 = (544 + 3)(544 − 3) = 547 · 541.
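The search for a square among n + 1^2, n + 2^2, ... can be sketched as follows (the function name is ours).

```python
from math import isqrt

# Fermat factorization: find j with n + j^2 = x^2 a perfect square,
# giving n = (x + j)(x - j).
def fermat_factor(n):
    j = 1
    while True:
        x2 = n + j * j
        x = isqrt(x2)
        if x * x == x2:
            return (x - j, x + j)
        j += 1
```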
The Fermat method works well when n is the product of two primes that are very close together. If n = pq with p > q, it takes (p − q)/2 steps to find the factorization. But if p and q are two randomly selected 300-digit primes, it is likely that p − q will be very large, probably around 300 digits, too. So Fermat factorization is unlikely to work. Just to be safe, however, the primes for an RSA modulus are often chosen to be of slightly different sizes.
We now turn to more modern methods. If one of the prime factors of n has a special property, it is sometimes easier to factor n. For example, if p divides n and p − 1 has only small prime factors, the following method, known as the p − 1 method, is effective. It was invented by Pollard in 1974.
Choose an integer a > 1. Often a = 2 is used. Choose a bound B. Compute b ≡ a^(B!) (mod n) as follows. Let b_1 ≡ a (mod n) and b_j ≡ b_{j−1}^j (mod n) for 2 ≤ j ≤ B. Then b_B ≡ a^(B!) (mod n). Let d = gcd(b_B − 1, n). If 1 < d < n, we have found a nontrivial factor of n.
Suppose p is a prime factor of n such that p − 1 has only small prime factors. Then it is likely that p − 1 will divide B!, say B! = (p − 1)k. By Fermat’s theorem, b_B ≡ a^(B!) ≡ (a^(p−1))^k ≡ 1 (mod p), so p will occur in the greatest common divisor of b_B − 1 and n. If q is another prime factor of n, it is unlikely that b_B ≡ 1 (mod q), unless q − 1 also has only small prime factors. If d = n, not all is lost. In this case, we have an exponent r (namely B!) and an a such that a^r ≡ 1 (mod n). There is a good chance that the exponent factorization method (explained later in this section) will factor n. Alternatively, we could choose a smaller value of B and repeat the calculation.
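The computation of a^(B!) mod n by repeated exponentiation, followed by the gcd, can be sketched as follows (the function name and the example moduli in the usage are our illustrative choices).

```python
from math import gcd

# A sketch of the Pollard p-1 method: raise a to the power B! by the
# iteration b <- b^j (mod n) for j = 2, ..., B, then take a gcd.
def pollard_p_minus_1(n, B, a=2):
    b = a % n
    for j in range(2, B + 1):     # after the loop, b ≡ a^(B!) (mod n)
        b = pow(b, j, n)
    d = gcd(b - 1, n)
    if 1 < d < n:
        return d
    return None                   # failed; try another B (or another a)
```

For example, for n = 1013 · 1019 the bound B = 23 succeeds because 1012 = 2^2 · 11 · 23 divides 23!, while 1018 = 2 · 509 does not.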
For an example, see Example 34 in the Computer Appendices.
How do we choose the bound B? If we choose a small B, then the algorithm will run quickly but will have a very small chance of success. If we choose a very large B, then the algorithm will be very slow. The actual value used will depend on the situation at hand.
In the applications, we will use integers that are products of two primes, say n = pq, but that are hard to factor. Therefore, we should ensure that p − 1 has at least one large prime factor. This is easy to accomplish. Suppose we want p to have around 300 digits. Choose a large prime p_0 of somewhat fewer than 300 digits. Look at integers of the form kp_0 + 1, with k running through integers of the appropriate size, so that kp_0 + 1 has around 300 digits. Test each such number for primality by the Miller-Rabin test, as before. On the average, this should produce a desired value of p in fewer than 400 steps. Now choose a large prime q_0 and follow the same procedure to obtain q. Then n = pq will be hard to factor by the p − 1 method.
The elliptic curve factorization method (see Section 21.3) gives a generalization of the p − 1 method. However, it uses some random numbers near p and only requires at least one of them to have only small prime factors. This allows the elliptic curve method to detect many more primes p, not just those where p − 1 has only small prime factors.
Since it is the basis of the best current factorization methods, we repeat the following result from Section 9.4.
Let n be an integer and suppose there exist integers x and y with x^2 ≡ y^2 (mod n), but x ≢ ±y (mod n). Then n is composite. Moreover, gcd(x − y, n) gives a nontrivial factor of n.
For an example, see Example 33 in the Computer Appendices.
How do we find the numbers x and y? Let’s suppose we want to factor n = 3837523. Observe the following:

9398^2 ≡ 5^5 · 19 (mod 3837523)
19095^2 ≡ 2^2 · 5 · 11 · 13 · 19 (mod 3837523)
1964^2 ≡ 3^2 · 13^3 (mod 3837523)
17078^2 ≡ 2^6 · 3^2 · 11 (mod 3837523)
If we multiply the relations, we obtain

(9398 · 19095 · 1964 · 17078)^2 ≡ (2^4 · 3^2 · 5^3 · 11 · 13^2 · 19)^2 (mod 3837523),

that is, 2230387^2 ≡ 2586705^2 (mod 3837523). Since 2230387 ≢ ±2586705 (mod 3837523), we now can factor 3837523 by calculating

gcd(2230387 − 2586705, 3837523) = 1093.

The other factor is 3837523/1093 = 3511.
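The computation just performed can be reproduced directly; this sketch multiplies the four relations mod n and takes the gcd.

```python
from math import gcd

# Reproducing the worked example: the four relations combine into
# x^2 ≡ y^2 (mod n), and gcd(x - y, n) reveals a factor of n.
n = 3837523
x = (9398 * 19095 * 1964 * 17078) % n
y = (2**4 * 3**2 * 5**3 * 11 * 13**2 * 19) % n
assert pow(x, 2, n) == pow(y, 2, n)      # x^2 ≡ y^2 (mod n)
factor = gcd(x - y, n)
print(factor, n // factor)               # prints: 1093 3511
```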
Here is a way of looking at the calculations we just did. First, we generate squares such that when they are reduced mod n they can be written as products of small primes (in the present case, primes less than 20). This set of primes is called our factor base. We’ll discuss how to generate such squares shortly. Each of these squares gives a row in a matrix, where the entries are the exponents of the primes 2, 3, 5, 7, 11, 13, 17, 19. For example, the relation 17078^2 ≡ 2^6 · 3^2 · 11 (mod 3837523) gives the row 6, 2, 0, 0, 1, 0, 0, 0.
In addition to the preceding relations, suppose that we have also found the following relations:

8077^2 ≡ 2 · 19 (mod 3837523)
3397^2 ≡ 2^5 · 5 · 13^2 (mod 3837523)
14262^2 ≡ 5^2 · 7^2 · 13 (mod 3837523)

We obtain the matrix

9398:   0 0 5 0 0 0 0 1
19095:  2 0 1 0 1 1 0 1
1964:   0 2 0 0 0 3 0 0
17078:  6 2 0 0 1 0 0 0
8077:   1 0 0 0 0 0 0 1
3397:   5 0 1 0 0 2 0 0
14262:  0 0 2 2 0 1 0 0
Now look for linear dependencies mod 2 among the rows. Here are three of them:

1st + 5th + 6th = (6, 0, 6, 0, 0, 2, 0, 2) ≡ (0, 0, 0, 0, 0, 0, 0, 0) (mod 2)
1st + 2nd + 3rd + 4th = (8, 4, 6, 0, 2, 4, 0, 2) ≡ (0, 0, 0, 0, 0, 0, 0, 0) (mod 2)
3rd + 7th = (0, 2, 2, 2, 0, 4, 0, 0) ≡ (0, 0, 0, 0, 0, 0, 0, 0) (mod 2)

When we have such a dependency, the product of the numbers yields a square. For example, these three dependencies yield

(9398 · 8077 · 3397)^2 ≡ (2^3 · 5^3 · 13 · 19)^2 (mod 3837523)
(9398 · 19095 · 1964 · 17078)^2 ≡ (2^4 · 3^2 · 5^3 · 11 · 13^2 · 19)^2 (mod 3837523)
(1964 · 14262)^2 ≡ (3 · 5 · 7 · 13^2)^2 (mod 3837523)
Therefore, we have x^2 ≡ y^2 (mod n) for various values of x and y. If x ≢ ±y (mod n), then gcd(x − y, n) yields a nontrivial factor of n. If x ≡ ±y (mod n), then usually gcd(x − y, n) = 1 or n, so we don’t obtain a factorization. In our three examples, we have

9398 · 8077 · 3397 ≡ 3590523 and 2^3 · 5^3 · 13 · 19 ≡ 247000 (mod 3837523), but 3590523 ≡ −247000 (mod 3837523), so this dependency gives only gcd(x − y, n) = 1;

9398 · 19095 · 1964 · 17078 ≡ 2230387 and 2^4 · 3^2 · 5^3 · 11 · 13^2 · 19 ≡ 2586705 (mod 3837523), and gcd(2230387 − 2586705, 3837523) = 1093;

1964 · 14262 ≡ 1147907 and 3 · 5 · 7 · 13^2 = 17745, and gcd(1147907 − 17745, 3837523) = 1093.
We now return to the basic question: How do we find the numbers 9398, 19095, etc.? The idea is to produce squares that are slightly larger than a multiple of n, so they are small mod n. This means that there is a good chance they are products of small primes. An easy way is to look at numbers of the form [√(kn)] + j for small j and for various values of k. Here [x] denotes the greatest integer less than or equal to x. The square of such a number is approximately kn + 2j√(kn) + j^2, which is approximately 2j√(kn) + j^2 (mod n). As long as j and k are not too large, this number is fairly small, hence there is a good chance it is a product of small primes.
In the preceding calculation, we have 9398 = [√(23 · 3837523)] + 4 and 19095 = [√(95 · 3837523)] + 2, for example.
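The search for such relations can be sketched as follows (the function name and the search bounds k_max, j_max are our illustrative choices; real implementations use sieving rather than trial division).

```python
from math import isqrt

# Try numbers of the form isqrt(k*n) + j, square them mod n, and keep the
# ones whose squares factor completely over the factor base.
def smooth_relations(n, factor_base, k_max=100, j_max=6):
    relations = []
    for k in range(1, k_max + 1):
        for j in range(1, j_max + 1):
            x = isqrt(k * n) + j
            r = pow(x, 2, n)
            if r == 0:
                continue
            exps = []
            for p in factor_base:       # trial-divide r over the base
                e = 0
                while r % p == 0:
                    r //= p
                    e += 1
                exps.append(e)
            if r == 1:                  # x^2 mod n is smooth: keep it
                relations.append((x, exps))
    return relations
```

Run on n = 3837523 with factor base 2, 3, 5, 7, 11, 13, 17, 19, this search recovers all seven relations used above, including 9398 (k = 23, j = 4) and 19095 (k = 95, j = 2).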
The method just used is the basis of many of the best current factorization methods. The main step is to produce congruence relations of the form

x^2 ≡ (product of small primes) (mod n).
An improved version of the above method is called the quadratic sieve. A recent method, the number field sieve, uses more sophisticated techniques to produce such relations and is somewhat faster in many situations. See [Pomerance] for a description of these two methods and for a discussion of the history of factorization methods. See also Exercise 52.
Once we have several congruence relations, they are put into a matrix, as before. If we have more rows than columns in the matrix, we are guaranteed to have a linear dependence relation mod 2 among the rows. This leads to a congruence x^2 ≡ y^2 (mod n). Of course, as in the case of 1st + 5th + 6th considered previously, we might end up with x ≡ ±y (mod n), in which case we don’t obtain a factorization. But this situation is expected to occur at most half the time. So if we have enough relations – for example, if there are several more rows than columns – then we should have a relation that yields x^2 ≡ y^2 with x ≢ ±y (mod n). In this case, gcd(x − y, n) is a nontrivial factor of n.
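The dependency search itself is linear algebra over GF(2); the following sketch packs each exponent vector mod 2 into a bitmask and performs Gaussian elimination, using the seven relations for n = 3837523 (exponent vectors over the factor base 2, 3, 5, 7, 11, 13, 17, 19) as data.

```python
# Exponent vectors of the seven relations for n = 3837523.
rows = [
    [0, 0, 5, 0, 0, 0, 0, 1],   # 9398^2
    [2, 0, 1, 0, 1, 1, 0, 1],   # 19095^2
    [0, 2, 0, 0, 0, 3, 0, 0],   # 1964^2
    [6, 2, 0, 0, 1, 0, 0, 0],   # 17078^2
    [1, 0, 0, 0, 0, 0, 0, 1],   # 8077^2
    [5, 0, 1, 0, 0, 2, 0, 0],   # 3397^2
    [0, 0, 2, 2, 0, 1, 0, 0],   # 14262^2
]

def find_dependency(rows):
    """Return indices of a subset of rows whose exponent sums are all even."""
    stored = []                        # (reduced bitmask, combination mask)
    for i, row in enumerate(rows):
        m = sum((e % 2) << j for j, e in enumerate(row))
        combo = 1 << i
        for pm, pc in stored:
            if m & (pm & -pm):         # reduce by this stored row's pivot
                m ^= pm
                combo ^= pc
        if m == 0:                     # dependency found
            return [j for j in range(len(rows)) if combo >> j & 1]
        stored.append((m, combo))
    return None
```

On this data the first dependency found is the 1st + 2nd + 3rd + 4th rows, which is the combination that produced the factor 1093 above.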
In the last half of the twentieth century, there was dramatic progress in factoring. This was partly due to the development of computers and partly due to improved algorithms. A major impetus was provided by the use of factoring in cryptology, especially the RSA algorithm. Table 9.2 gives the factorization records (in terms of the number of decimal digits) for various years.
On the surface, the Miller-Rabin test looks like it might factor n quite often; but what usually happens is that b_{k−1} is reached without ever having b_u^2 ≡ 1 (mod n). The problem is that usually a^(n−1) ≢ 1 (mod n). Suppose, on the other hand, that we have some exponent r, maybe not n − 1, such that a^r ≡ 1 (mod n) for some a with gcd(a, n) = 1. Then it is often possible to factor n.
Suppose we have an exponent r > 0 and an integer a such that a^r ≡ 1 (mod n). Write r = 2^k m with m odd. Let b_0 ≡ a^m (mod n), and successively define b_{u+1} ≡ b_u^2 (mod n) for 0 ≤ u ≤ k − 1. If b_0 ≡ 1 (mod n), then stop; the procedure has failed to factor n. If, for some u, we have b_u ≡ −1 (mod n), stop; the procedure has failed to factor n. If, for some u, we have b_{u+1} ≡ 1 (mod n) but b_u ≢ ±1 (mod n), then gcd(b_u − 1, n) gives a nontrivial factor of n.
Of course, if we take a = 1, then any r works. But then b_0 ≡ 1 (mod n), so the method fails. But if a and r are found by some reasonably sensible method, there is a good chance that this method will factor n.
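The procedure above can be sketched as follows (the function name is ours); it walks the squarings of a^m and looks for a square root of 1 other than ±1.

```python
from math import gcd

# Exponent factorization: given a and r with a^r ≡ 1 (mod n), try to find
# b with b^2 ≡ 1 (mod n) and b !≡ ±1, so gcd(b - 1, n) is a factor.
def exponent_factor(n, a, r):
    k, m = 0, r
    while m % 2 == 0:        # write r = 2^k * m with m odd
        k, m = k + 1, m // 2
    b = pow(a, m, n)
    if b == 1:
        return None          # b_0 ≡ 1: failure
    for _ in range(k):
        b2 = pow(b, 2, n)
        if b2 == 1:
            if b == n - 1:
                return None  # reached -1: failure
            return gcd(b - 1, n)
        b = b2
    return None
```

For example, since 2^560 ≡ 1 (mod 561), the pair (a, r) = (2, 560) factors 561, recovering the factor 33 found earlier.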
This looks very similar to the Miller-Rabin test. The difference is that the existence of r guarantees that we have b_u^2 ≡ 1 (mod n) for some u, which doesn’t happen as often in the Miller-Rabin situation. Trying a few values of a has a very high probability of factoring n.
Of course, we might ask how we can find such an exponent r. Generally, this seems to be very difficult, and this test cannot be used in practice. However, it is useful in showing that knowing the decryption exponent d in the RSA algorithm allows us to factor the modulus n. Moreover, if a quantum computer is built, it will perform factorizations by finding such an exponent via its unique quantum properties. See Chapter 25.
For an example of how this method is used in analyzing RSA, see Example 32 in the Computer Appendices.
When the RSA algorithm was first made public in 1977, Rivest, Shamir, and Adleman made the following challenge.
Let the RSA modulus be
and let e = 9007 be the encryption exponent. The ciphertext is
Find the message.
The only known way of finding the plaintext is to factor n. In 1977, it was estimated that the then-current factorization methods would take 4 × 10^16 years to do this, so the authors felt safe in offering $100 to anyone who could decipher the message before April 1, 1982. However, techniques have improved, and in 1994, Atkins, Graff, Lenstra, and Leyland succeeded in factoring n.
They used 524339 “small” primes, namely those less than 16333610, plus they allowed factorizations to include up to two “large” primes between 16333610 and 2^30. The idea of allowing large primes is the following: If one large prime p appears in two different relations, these can be multiplied to produce a relation with p squared. Multiplying by the square of the inverse of p mod n yields a relation involving only small primes. In the same way, if there are several relations, each with the same two large primes, a similar process yields a relation with only small primes. The “birthday paradox” (see Section 12.1) implies that there should be several cases where a large prime occurs in more than one relation.
Six hundred people, with a total of 1600 computers working in spare time, found congruence relations of the desired type. These were sent by e-mail to a central machine, which removed repetitions and stored the results in a large matrix. After seven months, they obtained a matrix with 524339 columns and 569466 rows. Fortunately, the matrix was sparse, in the sense that most of the entries of the matrix were 0s, so it could be stored efficiently. Gaussian elimination reduced the matrix to a nonsparse matrix with 188160 columns and 188614 rows. This took a little less than 12 hours. With another 45 hours of computation, they found 205 dependencies. The first three yielded the trivial factorization of n, but the fourth yielded the factors
Computing the inverse of e mod (p − 1)(q − 1) gave the decryption exponent d. Calculating c^d (mod n) yielded the plaintext message
which, when changed back to letters using a = 01, b = 02, ..., z = 26, with 00 representing a space, yielded

the magic words are squeamish ossifrage
(a squeamish ossifrage is an overly sensitive hawk; the message was chosen so that no one could decrypt the message by guessing the plaintext and showing that it encrypted to the ciphertext). For more details of this factorization, see [Atkins et al.]. If you want to see how the decryption works once the factorization is known, see Example 28 in the Computer Appendices.
Countries A and B have signed a nuclear test ban treaty. Now each wants to make sure the other doesn’t test any bombs. How, for example, is country A going to use seismic data to monitor country B? Country A wants to put sensors in B, which then send data back to A. Two problems arise.
Country A wants to be sure that Country B doesn’t modify the data.
Country B wants to look at the message before it’s sent to be sure that nothing else, such as espionage data, is being transmitted.
These seemingly contradictory requirements can be met by reversing RSA. First, A chooses n to be the product of two large primes and chooses encryption and decryption exponents e and d. The numbers n and e are given to B, but p, q, and d are kept secret. The sensor (it’s buried deep in the ground and is assumed to be tamper proof) collects the data x and uses d to encrypt x to y ≡ x^d (mod n). Both x and y are sent first to country B, which checks that y^e ≡ x (mod n). If so, it knows that the encrypted message y corresponds to the data x, and forwards the pair x, y to A. Country A then checks that y^e ≡ x (mod n), also. If so, A can be sure that the number x has not been modified, since if x is chosen, then finding y with y^e ≡ x (mod n) is the same as decrypting the RSA message x, and this is believed to be hard to do. Of course, B could choose a number y first, then let x ≡ y^e (mod n), but then x would probably not be a meaningful message, so A would realize that something had been changed.
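A toy run of the protocol can be sketched as follows; the primes, exponent, and data value are made-up illustrative numbers, far too small for real use.

```python
# Country A's key setup (illustrative small values only).
p, q = 61, 53
n = p * q                  # 3233; n and e are given to country B
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)        # d stays inside the tamper-proof sensor

x = 1234                   # seismic data collected by the sensor
y = pow(x, d, n)           # sensor "encrypts" x with the secret exponent d

assert pow(y, e, n) == x   # country B's check before forwarding (x, y)
assert pow(y, e, n) == x   # country A performs the same check
```

B can inspect x freely, but cannot produce a valid y for a modified x without solving y^e ≡ x (mod n), which is RSA decryption.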
The preceding method is essentially the RSA signature scheme, which will be studied in Section 13.1.
In 1976, Diffie and Hellman described the concept of public key cryptography, though at that time no realizations of the concept were publicly known (as mentioned in the introduction to this chapter, Clifford Cocks of the British cryptographic agency CESG had invented a secret version of RSA in 1973). In this section, we give the general theory of public key systems.
There are several implementations of public key cryptography other than RSA. In later chapters we describe three of them. One is due to ElGamal and is based on the difficulty of finding discrete logarithms. A second is NTRU and involves lattice methods. The third is due to McEliece and uses error correcting codes. There are also public key systems based on the knapsack problem. We don’t cover them in this book; some versions have been broken and they are generally suspected to be weaker than systems such as RSA and ElGamal.
A public key cryptosystem is built up of several components. First, there is the set M of possible messages (potential plaintexts and ciphertexts). There is also the set K of “keys.” These are not exactly the encryption/decryption keys; in RSA, a key k is a triple (e, d, n) with ed ≡ 1 (mod φ(n)). For each key k, there is an encryption function E_k and a decryption function D_k. Usually, E_k and D_k are assumed to map M to M, though it would be possible to have variations that allow the plaintexts and ciphertexts to come from different sets. These components must satisfy the following requirements:
(1) E_k(D_k(m)) = m and D_k(E_k(m)) = m for every m in M and every k in K.
(2) For every m in M and every k in K, the values of E_k(m) and D_k(m) are easy to compute.
(3) For almost every k in K, if someone knows only the function E_k, it is computationally infeasible to find an algorithm to compute D_k.
(4) Given k in K, it is easy to find the functions E_k and D_k.
Requirement (1) says that encryption and decryption cancel each other. Requirement (2) is needed; otherwise, efficient encryption and decryption would not be possible. Because of (4), a user can choose a secret random k from K and obtain functions E_k and D_k. Requirement (3) is what makes the system public key. Since it is difficult to determine D_k from E_k, it is possible to publish E_k without compromising the security of the system.
Let’s see how RSA satisfies these requirements. The message space can be taken to be all nonnegative integers. As we mentioned previously, a key for RSA is a triple k = (e, d, n). The encryption function is

E_k(m) ≡ m^e (mod n),

where we break m into blocks if m ≥ n. The decryption function is

D_k(c) ≡ c^d (mod n),

again with c broken into blocks if needed. The functions E_k and D_k are immediately determined from knowledge of k = (e, d, n) (requirement (4)) and are easy to compute (requirement (2)). They are inverses of each other since ed ≡ 1 (mod φ(n)), so (1) is satisfied. If we know E_k, which means we know e and n, then we have seen that it is (probably) computationally infeasible to determine d, hence D_k. Therefore, (3) is (probably) satisfied.
Once a public key system is set up, each user generates a key k and determines E_k and D_k. The encryption function E_k is made public, while D_k is kept secret. If there is a problem with impostors, a trusted authority can be used to distribute and verify keys.
In a symmetric system, Bob can be sure that a message that decrypts successfully must have come from Alice (who could really be a group of authorized users) or someone who has Alice’s key. Only Alice has been given the key, so no one else could produce the ciphertext. However, Alice could deny sending the message since Bob could have simply encrypted the message himself. Therefore, authentication is easy (Bob knows that the message came from Alice, if he didn’t forge it himself) but non-repudiation is not (see Section 1.2).
In a public key system, anyone can encrypt a message and send it to Bob, so he will have no idea where it came from. He certainly won’t be able to prove it came from Alice. Therefore, more steps are needed for authentication and non-repudiation. However, these goals are easily accomplished as follows.
Alice starts with her message m and computes E_B(D_A(m)), where A is Alice’s key and B is Bob’s key. Then Bob can decrypt using D_B to obtain D_A(m). He uses the publicly available E_A to obtain m = E_A(D_A(m)). Bob knows that the message must have come from Alice since no one else could have computed D_A(m). For the same reason, Alice cannot deny sending the message. Of course, all this assumes that most random “messages” are meaningless, so it is unlikely that a random string of symbols decrypts to a meaningful message unless the string was the encryption of something meaningful.
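The steps above can be illustrated with two toy RSA key pairs (all numbers are made-up small examples; a real system also needs the signed value to fit under Bob's modulus, which holds here).

```python
# Build a toy RSA key (n, e, d) from primes p, q and a public exponent e.
def make_key(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e, pow(e, -1, phi))

nA, eA, dA = make_key(61, 53, 17)   # Alice's key A
nB, eB, dB = make_key(89, 97, 5)    # Bob's key B

m = 1000
c = pow(pow(m, dA, nA), eB, nB)     # Alice sends E_B(D_A(m))
s = pow(c, dB, nB)                  # Bob decrypts with D_B, recovering D_A(m)
assert pow(s, eA, nA) == m          # public E_A undoes D_A: message is Alice's
```

Only Alice could have produced a value s with E_A(s) = m, which is what gives both authentication and non-repudiation.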
It is possible to use one-way functions with certain properties to construct a public key cryptosystem. Let f be an invertible one-way function. This means that f is easy to compute, but, given y, it is computationally infeasible to find the unique value of x such that f(x) = y. Now suppose f has a trapdoor, which means that there is an easy way to solve f(x) = y for x, but only with some extra information known only to the designer of the function. Moreover, it should be computationally infeasible for someone other than the designer of the function to determine this trapdoor information. If there is a very large family of one-way functions with trapdoors, they can be used to form a public key cryptosystem. Each user generates a function from the family in such a way that only that user knows the trapdoor. The user’s function is then published as a public encryption algorithm. When Alice wants to send a message m to Bob, she looks up his function f_B and computes f_B(m). Alice sends f_B(m) to Bob. Since Bob knows the trapdoor for f_B, he can solve f_B(x) = f_B(m) for x and thus find m.
In RSA, the functions f(x) ≡ x^e (mod n), for appropriate n and e, form the family of one-way functions. The secret trapdoor information is the factorization of n, or, equivalently, the exponent d. In the ElGamal system (Section 10.5), the one-way function is obtained from exponentiation modulo a prime, and the trapdoor information is knowledge of a discrete log. In NTRU (Section 23.4), the trapdoor information is a pair of small polynomials. In the McEliece system (Section 24.10), the trapdoor information is an efficient way for finding the nearest codeword (“error correction”) for certain linear binary codes.
The ciphertext 5859 was obtained from the RSA algorithm using and . Using the factorization , find the plaintext.
Bob sets up a budget RSA cryptosystem. He chooses and and computes . He chooses the encryption exponent to be . Alice sends Bob the ciphertext . What is the plaintext? (You know , , and ).
Suppose your RSA modulus is and your encryption exponent is .
Find the decryption exponent .
Assume that . Show that if is the ciphertext, then the plaintext is . Do not quote the fact that RSA decryption works. That is what you are showing in this specific case.
Bob’s RSA modulus is and his encryption exponent is . Alice sends him the ciphertext . What is the plaintext?
The ciphertext was obtained using RSA with and . You know that the plaintext is either 9 or 10. Determine which it is without factoring .
Alice and Bob are trying to use RSA, but Bob knows only one large prime, namely . He sends and to Alice. She encrypts her message as . Eve intercepts and decrypts using the congruence . What value of should Eve use? Your answer should be an actual number. You may assume that Eve knows and , and she knows that is prime.
Suppose you encrypt messages by computing . How do you decrypt? (That is, you want a decryption exponent such that ; note that 101 is prime.)
Bob knows that if an RSA modulus can be factored, then the system has bad security. Therefore, he chooses a modulus that cannot be factored, namely the prime . He chooses his encryption exponent to be , and he encrypts a message as . The decryption method is to compute for some . Find . (Hint: Fermat’s theorem)
Let be a large prime. Suppose you encrypt a message by computing for some (suitably chosen) encryption exponent . How do you find a decryption exponent such that ?
Bob decides to test his new RSA cryptosystem. He has RSA modulus and encryption exponent . His message is . He sends to himself. Then, just for fun, he also sends to himself. Eve knows and , and intercepts both and . She guesses what Bob has done. How can she find the factorization of ? (Hint: Show that but not mod . What is ?)
Let be the product of two large primes. Alice wants to send a message to Bob, where . Alice and Bob choose integers and relatively prime to . Alice computes and sends to Bob. Bob computes and sends back to Alice. Since Alice knows , she finds such that . Then she computes and sends to Bob. Explain what Bob must now do to obtain , and show that this works. (Remark: In this protocol, the prime factors of do not need to be kept secret. Instead, the security depends on keeping secret. The present protocol is a less efficient version of the three-pass protocol from Section 3.5.)
A bank in Alice Springs (Australia), also known as Alice, wants to send a lot of financial data to the Bank of Baltimore, also known as Bob. They want to use AES, but they do not share a common key. All of their communications will be on public airwaves. Describe how Alice and Bob can accomplish this using RSA.
Naive Nelson uses RSA to receive a single ciphertext , corresponding to the message . His public modulus is and his public encryption exponent is . Since he feels guilty that his system was used only once, he agrees to decrypt any ciphertext that someone sends him, as long as it is not , and return the answer to that person. Evil Eve sends him the ciphertext . Show how this allows Eve to find .
Eve loves to do double encryption. She starts with a message . First, she encrypts it twice with a one-time pad (the same one each time). Then she encrypts the result twice using a Vigenère cipher with key . Finally, she encrypts twice with RSA using modulus and exponent . It happens that . Show that the final result of all this encryption is the original plaintext. Explain your answer fully. Simply saying something like “decryption is the same as encryption” is not enough. You must explain why.
In order to increase security, Bob chooses and two encryption exponents , . He asks Alice to encrypt her message to him by first computing , then encrypting to get . Alice then sends to Bob. Does this double encryption increase security over single encryption? Why or why not?
Eve thinks that she has a great strategy for breaking RSA that uses a modulus that is the product of two 300-digit primes. She decides to make a list of all 300-digit primes and then divide each of them into until she factors . Why won’t this strategy work?
Eve has another strategy. She will make a list of all with , encrypt each , and store them in a database. Suppose Eve has a superfast computer that can encrypt plaintexts per second (this is, of course, well beyond the speed of any existing technology). How many years will it take Eve to compute all encryptions? (There are approximately seconds in a year.)
The exponents and should not be used in RSA. Why?
Alice is trying to factor . She notices that and that . How does she use this information to factor ? Describe the steps but do not actually factor .
Let and be distinct odd primes, and let . Suppose that the integer satisfies .
Show that and .
Use (a) to show that .
Use (b) to show that if then . (This shows that we could work with instead of in RSA. In fact, we could also use the least common multiple of and in place of , by similar reasoning.)
Alice uses RSA with and . Her ciphertext is . Eve notices that .
Show that , where is the plaintext.
Explicitly find an exponent such that . (Hint: You do not need to factor to find . Look at the proof that RSA decryption works. The only property of that is used is that .)
Suppose that there are two users on a network. Let their RSA moduli be and , with not equal to . If you are told that and are not relatively prime, how would you break their systems?
Huey, Dewey, and Louie ask their uncle Donald, “Is prime or composite?” Donald replies, “Yes.” Therefore, they decide to obtain more information on their own.
Huey computes . What does he conclude?
Dewey computes , and he does this by computing
What information can Dewey obtain from his calculation that Huey does not obtain?
Louie notices that . What information can Louie compute? (In parts (b) and (c), you do not need to do the calculations, but you should indicate what calculations need to be done.)
You are trying to factor . Suppose you discover that
and that
Use this information to factor .
Suppose you know that . Use this information to factor .
Suppose you discover that
How would you use this information to factor 2288233? Explain what steps you would take, but do not perform the numerical calculations.
Suppose you want to factor an integer . You have found some integers such that
Describe how you might be able to use this information to factor . Why might the method fail?
Suppose you have two distinct large primes and . Explain how you can find an integer such that
(Hint: Use the Chinese Remainder Theorem to find four solutions to .)
You are told that
Use this information to factor .
Suppose is a large odd number. You calculate , where is some integer with .
Suppose . Explain why this implies that is not prime.
Suppose . Explain how you can use this information to factor .
Bob is trying to set up an RSA cryptosystem. He chooses and , as usual. By encrypting messages six times, Eve guesses that . If this is the case, what is the decryption exponent ? (That is, give a formula for in terms of the parameters and that allows Eve to compute .)
Bob tries again, with a new and . Alice computes . Eve sees no way to guess the decryption exponent this time. Knowing that if she finds she will have to do modular exponentiation, Eve starts computing successive squares: . She notices that and realizes that this means that . If , what is a value for that will decrypt the ciphertext? Prove that this value works.
Suppose two users Alice and Bob have the same RSA modulus and suppose that their encryption exponents and are relatively prime. Charles wants to send the message to Alice and Bob, so he encrypts to get and . Show how Eve can find if she intercepts and .
Bob finally sets up a very secure RSA system. In fact, it is so secure that he decides to tell Alice one of the prime factors of ; call it . Being no dummy, he does not take the message as , but instead uses a shift cipher to hide the prime in the plaintext , and then does an RSA encryption to obtain . He then sends to Alice. Eve intercepts , and , and she knows that Bob has encrypted this way. Explain how she obtains and quickly. (Hint: How does differ from the result of encrypting the simple plaintext “1” ?)
Suppose Alice uses the RSA method as follows. She starts with a message consisting of several letters, and assigns . She then encrypts each letter separately. For example, if her message is , she calculates , , and . Then she sends the encrypted message to Bob. Explain how Eve can find the message without factoring . In particular, suppose and . Eve intercepts the message
Find the message without factoring 8881.
Let . Bob Square Messages sends and receives only messages such that is a square mod and . It can be shown that for such messages (even though ). Bob chooses and satisfying . Show that if Alice sends Bob a ciphertext (where is a square mod , and ), then Bob can decrypt by computing . Explain your reasoning.
Show that if and , then is a nontrivial factor of .
Bob’s RSA system uses and . Alice encrypts the message and sends the ciphertext to Bob. Unfortunately (for Alice), . Show that Alice’s ciphertext is the same as the plaintext. (Do not factor . Do not compute without using the extra information that . Do not claim that ; it doesn’t.)
Let be the product of two distinct primes.
Let be a multiple of . Show that if , then and .
Suppose is as in part (a), and let be arbitrary (possibly ). Show that and .
Let and be encryption and decryption exponents for RSA with modulus . Show that for all . This shows that we do not need to assume in order to use RSA.
If and are large, why is it likely that for a randomly chosen ?
Alice and Bob are celebrating Pi Day. Alice calculates and Bob calculates .
Use this information to factor . (You should show how to use the information. Do not do the calculation. The answer is not “Put the number in the computer and then relax for 10 minutes.”)
Given that and that and , how would you produce the numbers 23039 and 35118 in part (a)? You do not need to do the calculations, but you should state which congruences are being solved and what theorems are being used.
Now that Eve knows that , she wants to find an such that . Explain how to accomplish this. Say what the method is. Do not do the calculation.
Suppose is the product of three distinct primes. How would an RSA-type scheme work in this case? In particular, what relation would and satisfy?
Note: There does not seem to be any advantage in using three primes instead of two. The running times of some factorization methods depend on the size of the smallest prime factor. Therefore, if three primes are used, the size of must be increased in order to achieve the same level of security as obtained with two primes.
Suppose Bob’s public key is and he has as his encryption exponent. Alice encrypts the message hi eve = . By chance, the message satisfies . If Eve intercepts the ciphertext, how can Eve read the message without factoring ?
Let and . Let . A calculation shows that . Alice decides to encrypt the message using RSA with modulus and exponent . Since she wants the encryption to be very secure, she encrypts the ciphertext, again using and (so she has double encrypted the original plaintext). What is the final ciphertext that she sends? Justify your answer without using a calculator.
You are told that . Use this information to factor . You must use this information and you must give all steps of the computation (that is, give the steps you use if you are doing it completely without a calculator).
Show that if , then .
Show that if is used as an RSA modulus, then the encryption exponent always equals the decryption exponent .
The exponent has sometimes been used for RSA because it makes encryption fast. Suppose that Alice is encrypting two messages, and (for example, these could be two messages that contain a counter variable that increments). Eve does not know but knows that the two plaintexts differ by 1. Let and mod . Show that if Eve knows and , she can recover . (Hint: Compute .)
Suppose that and for some publicly known . Modify the technique of part (a) so that Eve can recover from the ciphertexts and .
Your opponent uses RSA with and encryption exponent and encrypts a message . This yields the ciphertext . A spy tells you that, for this message, . Describe how to determine . Note that you do not know , , , or the secret decryption exponent . However, you should find a decryption exponent that works for this particular ciphertext. Moreover, explain carefully why your decryption works (your explanation must include how the spy’s information is used). For simplicity, assume that .
Show that if then . You may use the fact that .
By part (a), you know that . When checking this result, you compute and . Use this information to find a nontrivial factor of 1729.
Suppose you are using RSA (with modulus and encrypting exponent ), but you decide to restrict your messages to numbers satisfying .
Show that if satisfies , then works as a decryption exponent for these messages.
Assume that both and are congruent to 1 mod 1000. Determine how many messages satisfy . You may assume and use the fact that has 1000 solutions when is a prime congruent to 1 mod 1000.
You may assume the fact that for all with . Let and satisfy , and suppose that is a message such that and . Encrypt as . Show that . Show explicitly how you use the fact that and the fact that . (Note: , so Euler’s theorem does not apply.)
Suppose Bob’s encryption company produces two machines, A and B, both of which are supposed to be implementations of RSA using the same modulus for some unknown primes and . Both machines also use the same encryption exponent . Each machine receives a message and outputs a ciphertext that is supposed to be . Machine A always produces the correct output . However, Machine B, because of implementation and hardware errors, always outputs a ciphertext such that and . How could you use machines A and B to find and ? (See Computer Problem 11 for a discussion of how such a situation could arise.) (Hint: but not mod . What is ?)
Alice and Bob play the following game. They choose a large odd integer and write with odd. Alice then chooses a random integer with . Bob computes . Then Alice computes . Then Bob computes , Alice computes , etc. They stop if someone gets , and the person who gets wins.
Show that if is prime, the game eventually stops.
Suppose is the product of two distinct primes and Alice knows this factorization. Show how Alice can choose so that she wins on her first play. That is, but .
Suppose Alice wants to send a short message but wants to prevent the short message attack of Section 9.2. She tells Bob that she is adjoining 100 zeros at the end of her plaintext, so she is using as the plaintext and sending . If Eve knows that Alice is doing this, how can Eve modify the short plaintext attack and possibly find the plaintext?
Suppose Alice realizes that the method of part (a) does not provide security, so instead she makes the plaintext longer by repeating it two times: (where means we write the digits of followed by the digits of to obtain a longer number). If Eve knows that Alice is doing this, how can Eve modify the short plaintext attack and possibly find the plaintext? Assume that Eve knows the length of . (Hint: Express as a multiple of .)
This exercise provides some of the details of how the quadratic sieve obtains the relations that are used to factor a large odd integer . Let be the smallest integer greater than the square root of and let . Let the factor base consist of the primes up to some bound . We want to find squares that are congruent mod to a product of primes in . One way to do this is to find values of that are products of primes in . We’ll search over a range , for some .
Suppose . Show that , so is simply . (Hint: Show that .) Henceforth, we’ll assume that , so the values of that we consider have .
Let be a prime in . Show that if there exists an integer with divisible by , then is a square mod . This shows that we may discard those primes in for which is not a square mod . Henceforth, we will assume that such primes have been discarded.
Let be such that is a square mod . Show that if is odd, and , then there are exactly two values of mod such that . Call these values and . (Note: In the unlikely case that , we have found a factor, which was the goal.)
For each with , initialize a register with value . For each prime , subtract from the registers of those with . (Remark: This is the “sieving” part of the quadratic sieve.) Show that if (with ) is a product of distinct primes in , then the register for becomes 0 at the end of this process.
Explain why it is likely that if (with ) is a product of (possibly nondistinct) primes in , then the final result for the register for is small (compared to the register for an such that has a prime factor not in ).
Why is the procedure of part (d) faster than trial division of each by each element of , and why does the algorithm subtract rather than divide by ?
In practice, the sieve also takes into account solutions to mod some powers of small primes in . After the sieving process is complete, the registers with small entries are checked to see which correspond to being a product of primes from . These give the relations “square product of primes in mod ” that are used to factor .
Bob chooses to be the product of two large primes such that .
Show that (this is true for any positive ).
Let . Show that (this is true for any positive ).
Alice has message represented as . She encrypts by first choosing a random integer with . She then computes
Bob decrypts by computing . Show that . Therefore, Bob can recover the message by computing and then dividing by .
Let and denote encryption and decryption via the method in parts (c) and (d). Show that
Note: The encryptions probably use different values of the random number , so there is more than one possible encryption of a message . Part (e) says that, no matter what choices are made for , the decryption of is the same as the decryption of .
The preceding is called the Paillier cryptosystem. (Equation 9.1) says that it is possible to do addition on the encrypted messages without knowing the messages. For many years, a goal was to design a cryptosystem where both addition and multiplication could be done on the encrypted messages. This property is called homomorphic encryption, and the first such system was designed by Gentry in 2009. Current research aims at designing systems that can be used in practice.
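The scheme described in the preceding parts can be sketched in code. The sketch below uses toy primes of our own choosing (47 and 59; a real implementation would use primes with hundreds of digits), takes the base of the exponentiation to be 1 + n, and uses λ = lcm(p − 1, q − 1) in the role of the decryption exponent (φ(n) works the same way, by the same reasoning).

```python
import math
import random

def paillier_keygen(p, q):
    # n = pq, chosen so that gcd(n, (p-1)(q-1)) = 1 as in the exercise
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    return n, lam

def encrypt(n, m, r=None):
    n2 = n * n
    if r is None:                       # random r with gcd(r, n) = 1
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
    # c = (1+n)^m * r^n mod n^2 ; note (1+n)^m ≡ 1 + mn (mod n^2)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def decrypt(n, lam, c):
    n2 = n * n
    u = pow(c, lam, n2)                 # = 1 + m*lam*n (mod n^2)
    L = (u - 1) // n                    # = m*lam mod n
    return L * pow(lam, -1, n) % n

n, lam = paillier_keygen(47, 59)
m1, m2 = 123, 456
c1, c2 = encrypt(n, m1), encrypt(n, m2)
assert decrypt(n, lam, c1) == m1
# additive homomorphism: E(m1)*E(m2) decrypts to m1 + m2 (mod n)
assert decrypt(n, lam, c1 * c2 % (n * n)) == (m1 + m2) % n
```

Note that the two ciphertexts use independently chosen random values of r, yet the product still decrypts to the sum of the plaintexts, which is the homomorphic property the exercise establishes.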
One possible application of the Paillier cryptosystem from the previous exercise is to electronic voting (but, as we’ll see, modifications are needed in order to make it secure). Bob, who is the trusted authority, sets up the system. Each voter uses for NO and for YES. The voters encrypt their votes and send the ciphertexts to Bob.
How does Bob determine how many YES and how many NO votes without decrypting the individual votes?
Suppose an overzealous and not very honest voter wants to increase the number of YES votes. How is this accomplished?
Suppose someone else wants to increase the number of NO votes. How can this be done?
Here is a 3-person encryption scheme based on the same principles as RSA. A trusted entity chooses two large distinct primes and and computes , then chooses three integers with . Alice, Bob, and Carla are given the following keys:
Alice has a message that she wants to send to both Bob and Carla. How can Alice encrypt the message so that both of them can decrypt it?
Alice has a message that she wants to send only to Carla. How can Alice encrypt the message so that Carla can decrypt it but Bob cannot decrypt it?
Paul Revere’s friend in a tower at MIT says he’ll send the message one if (the British are coming) by land and two if by sea. Since they know that RSA will be invented in the Boston area, they decide that the message should be encrypted using RSA with and . Paul Revere receives the ciphertext 273095689186. What was the plaintext? Answer this without factoring .
What could Paul Revere’s friend have done so that we couldn’t guess which message was encrypted? (See the end of Subsection 9.2.2.)
In an RSA cryptosystem, suppose you know , , and . Factor using the method of Subsection 9.4.2.
Choose two 30-digit primes and and an encryption exponent . Encrypt each of the plaintexts cat, bat, hat, encyclopedia, antidisestablishmentarianism. Can you tell from looking at the ciphertexts that the first three plaintexts differ in only one letter or that the last two plaintexts are much longer than the first three?
Factor 618240007109027021 by the method.
Factor 8834884587090814646372459890377418962766907 by the method. (The number is stored in the downloadable computer files (bit.ly/2JbcS6p) as n1.)
Let . Suppose you know that
Factor .
Let . Find and with but .
Suppose you know that
Use this information to factor 670726081.
Suppose you know that . Why won’t this information help you to factor 670726081?
Suppose you know that
How would you use this information to factor 3837523? Note that the exponent 1916460 is twice the exponent 958230.
Alice and Bob have the same RSA modulus , given to them by some central authority (who does not tell them the factorization of ). Alice has encryption and decryption exponents and , and Bob has and . As usual, and are public and and are private.
Suppose the primes and used in the RSA algorithm are consecutive primes. How would you factor ?
The ciphertext 10787770728 was encrypted using and . The factors and of were chosen so that . Decrypt the message.
The following ciphertext was encrypted mod using the exponent :
The prime factors and of are consecutive primes. Decrypt the message. (The number is stored in the downloadable computer files (bit.ly/2JbcS6p) as naive, and is stored as cnaive.) Note: In Mathematica®, the command Round[N[Sqrt[n],50]] evaluates the square root of to 50 decimal places and then rounds to the nearest integer. In Maple, first use the command Digits:=50 to obtain 50-digit accuracy, then use the command round(sqrt(n*1.)) to change to a decimal number, take its square root, and round to the nearest integer. In MATLAB, use the command digits(50);round(vpa(sqrt('n'))).
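The idea behind these consecutive-prime problems is that when p and q are close together, Fermat's method starting from the integer square root of n factors n almost immediately. A sketch in Python, using the toy modulus 10007 · 10009 of our own choosing rather than the book's numbers:

```python
from math import isqrt

def fermat_factor(n):
    # Write n = a^2 - b^2 = (a-b)(a+b); start a at ceil(sqrt(n)).
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:                 # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

p, q = fermat_factor(10007 * 10009)
assert (p, q) == (10007, 10009)
```

For consecutive primes the very first value of a already works, since n = (a − 1)(a + 1) = a² − 1; the farther apart p and q are, the more iterations the loop needs.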
Let , , and . Let the message be .
Compute and ; then use the Chinese remainder theorem to combine these to get .
Change one digit of (for example, this could be caused by some radiation). Now combine this with to get an incorrect value for . Compute . Why does this factor ?
The method of (a) for computing is attractive since it does not require as large multiprecision arithmetic as working directly mod . However, as part (b) shows, if an attacker can cause an occasional bit to fail, then can be factored.
Suppose that , , and . The ciphertext is transmitted, but an error occurs during transmission. The received ciphertext is 2304329328016936947195. The receiver is able to determine that the digits received are correct but that last digit is missing. Determine the missing digit and decrypt the message.
Test 38200901201 for primality using the Miller-Rabin test with . Then test using . Note that the first test says that 38200901201 is probably prime, while the second test says that it is composite. A composite number such as 38200901201 that passes the Miller-Rabin test for a number is called a strong -pseudoprime.
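The Miller-Rabin test being applied in this problem can be sketched as follows. Here 2047 = 23 · 89, a classical strong 2-pseudoprime, stands in as an illustration of the same phenomenon: it passes the test for one base but is revealed as composite by another.

```python
def miller_rabin(n, a):
    """Return False if a proves n composite, True if n passes for base a."""
    if n % 2 == 0:
        return n == 2
    # write n - 1 = 2^s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):              # successive squarings
        x = x * x % n
        if x == n - 1:
            return True
    return False

assert miller_rabin(2047, 2)            # passes base 2: a strong 2-pseudoprime
assert not miller_rabin(2047, 3)        # base 3 reveals 2047 = 23*89 composite
assert miller_rabin(104729, 2)          # 104729 is prime
```

This is why the test is run with several bases in practice: a composite number can pass for a particular base, but only rarely.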
There are three users with pairwise relatively prime moduli . Suppose that their encryption exponents are all . The same message is sent to each of them and you intercept the ciphertexts for .
Show that .
Show how to use the Chinese remainder theorem to find (as an exact integer, not only as ) and therefore . Do this without factoring.
Suppose that
and the corresponding ciphertexts are
These were all encrypted using . Find the message.
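The attack behind this group of problems can be sketched with toy numbers of our own choosing (the moduli 55, 203, 221 and the message 6 below are illustrative only, not the book's values): combine the three ciphertexts with the Chinese remainder theorem to recover the cube of the message as an exact integer, then take an integer cube root.

```python
def crt(residues, moduli):
    # Chinese remainder theorem for pairwise coprime moduli
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for r, n in zip(residues, moduli):
        M = N // n
        x += r * M * pow(M, -1, n)
    return x % N

def icbrt(x):
    # integer cube root by binary search
    lo, hi = 0, 1 << ((x.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

# three users with e = 3 and pairwise coprime moduli; the same m goes to each
moduli = [55, 203, 221]
m = 6
cts = [pow(m, 3, n) for n in moduli]
m3 = crt(cts, moduli)          # m^3 < 55*203*221, so this is m^3 exactly
assert icbrt(m3) == m
```

The key point is that m³ is smaller than the product of the three moduli, so the CRT recovers m³ as an honest integer and no factoring is needed.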
Choose a 10-digit prime and and 11-digit prime . Form .
Let the encryption exponent be . Write a program that computes the RSA encryptions of all plaintexts with . (Do not store or display the results.) The computer probably did this almost instantaneously.
Modify your program in (b) so that it computes the encryptions of all with and time how long this takes (if this takes too long, use ; if it’s too fast to time, use ).
Using your timing from (c), estimate how long it will take to encrypt all with (a year is approximately seconds).
Even this small example shows that it is impractical to make a database of all encryptions in order to attack RSA.
In the RSA algorithm, we saw how the difficulty of factoring yields useful cryptosystems. There is another number theory problem, namely discrete logarithms, that has similar applications.
Fix a prime . Let and be nonzero integers mod and suppose
The problem of finding is called the discrete logarithm problem. If is the smallest positive integer such that , we may assume , and then we denote
and call it the discrete log of with respect to (the prime is omitted from the notation).
For example, let and let . Since , we have . Of course, , so we could consider taking any one of 6, 16, 26 as the discrete logarithm. But we fix the value by taking the smallest nonnegative value, namely 6. Note that we could have defined the discrete logarithm in this case to be the congruence class 6 mod 10. In some ways, this would be more natural, but there are applications where it is convenient to have a number, not just a congruence class.
Often, is taken to be a primitive root mod , which means that every is a power of . If is not a primitive root, then the discrete logarithm will not be defined for certain values of .
Given a prime , it is fairly easy to find a primitive root in many cases. See Exercise 54 in Chapter 3.
The discrete log behaves in many ways like the usual logarithm. In particular, if is a primitive root mod , then
(see Exercise 6).
When is small, it is easy to compute discrete logs by exhaustive search through all possible exponents. However, when is large this is not feasible. We give some ways of attacking discrete log problems later. However, it is believed that discrete logs are hard to compute in general. This assumption is the basis of several cryptosystems.
The size of the largest primes for which discrete logs can be computed has usually been approximately the same as the size of the largest integers that could be factored (both of these refer to computations that would work for arbitrary numbers of these sizes; special choices of integers will succumb to special techniques, and thus discrete log computations and factorizations work for much larger specially chosen numbers). Compare Table 10.1 with Table 9.2 in Chapter 9.
A function is called a one-way function if is easy to compute, but, given , it is computationally infeasible to find with . Modular exponentiation is probably an example of such a function. It is easy to compute , but solving for is probably hard. Multiplication of large primes can also be regarded as a (probable) one-way function: It is easy to multiply primes but difficult to factor the result to recover the primes. One-way functions have many cryptographic uses.
In this section, we present some methods for computing discrete logarithms. A method based on the birthday attack is discussed in Subsection 12.1.1.
For simplicity, take to be a primitive root mod , so is the smallest positive exponent such that . This implies that
Assume that
We want to find .
First, it’s easy to determine . Note that
so (see Exercise 15 in Chapter 3). However, is assumed to be the smallest exponent to yield , so we must have
Starting with , raise both sides to the power to obtain
Therefore, if , then is even; otherwise, is odd.
Suppose we want to solve . Since
we must have even. In fact, , as we saw previously.
The preceding idea was extended by Pohlig and Hellman to give an algorithm to compute discrete logs when has only small prime factors. Suppose
is the factorization of into primes. Let be one of the factors. We’ll compute . If this can be done for each , the answers can be recombined using the Chinese remainder theorem to find the discrete logarithm.
Write
We’ll determine the coefficients successively, and thus obtain . Note that
where is an integer. Starting with , raise both sides to the power to obtain
The last congruence is a consequence of Fermat’s theorem: . To find , simply look at the powers
until one of them yields . Then . Note that since , and since the exponents are distinct mod , there is a unique that yields the answer.
An extension of this idea yields the remaining coefficients. Assume that . Let
Raise both sides to the power to obtain
The last congruence follows by applying Fermat’s theorem. We couldn’t calculate as since fractional exponents cause problems. Note that every exponent we have used is an integer.
To find , simply look at the powers
until one of them yields . Then .
If , let and raise both sides to the power to obtain . In this way, we can continue until we find that doesn’t divide . Since we cannot use fractional exponents, we must stop. But we have determined , so we know .
Repeat the procedure for all the prime factors of . This yields mod for all . The Chinese remainder theorem allows us to combine these into a congruence for mod . Since , this determines .
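The algorithm just described can be sketched in Python. Everything below (the prime 1009 and the exponent 123) is a toy example of our own choosing; the trial-division factorization and the digit search are only feasible because the prime factors of p − 1 are small, which is exactly the algorithm's limitation.

```python
def factorize(n):
    # trial-division factorization; fine for the toy sizes used here
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def crt_pair(r1, n1, r2, n2):
    # combine x ≡ r1 (mod n1) and x ≡ r2 (mod n2), gcd(n1, n2) = 1
    return (r1 + n1 * ((r2 - r1) * pow(n1, -1, n2) % n2)) % (n1 * n2)

def pohlig_hellman(alpha, beta, p):
    n = p - 1                          # the order of the primitive root alpha
    x, mod = 0, 1
    for q, e in factorize(n).items():
        # determine x mod q^e digit by digit: x = x0 + x1*q + x2*q^2 + ...
        base = pow(alpha, n // q, p)   # an element of order exactly q
        xq, gamma = 0, beta
        for k in range(e):
            target = pow(gamma, n // q ** (k + 1), p)
            d = 0                      # search the q possible digits
            while pow(base, d, p) != target:
                d += 1
            xq += d * q ** k
            gamma = gamma * pow(alpha, (-d * q ** k) % n, p) % p
        x, mod = crt_pair(x, mod, xq, q ** e), mod * q ** e
    return x

def primitive_root(p):
    qs = list(factorize(p - 1))
    return next(g for g in range(2, p)
                if all(pow(g, (p - 1) // q, p) != 1 for q in qs))

p = 1009                               # p - 1 = 1008 = 2^4 * 3^2 * 7
alpha = primitive_root(p)
beta = pow(alpha, 123, p)
assert pohlig_hellman(alpha, beta, p) == 123
```

The inner search tries at most q values for each digit, so the running time is governed by the largest prime factor of p − 1, matching the remark above that p − 1 should have a large prime factor if the discrete log is to be hard.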
Let , , and . We want to solve
Note that
First, let and let’s find . Write .
To start,
and
Since
we have . Next,
Also,
Since
we have . Continuing, we have
and
Therefore, . We have obtained
Now, let and let’s find mod 5. We have
and
Trying the possible values of yields
Therefore, gives the desired answer, so .
Since and , we combine these to obtain , so . A quick calculation checks that , as desired.
As long as the primes involved in the preceding algorithm are reasonably small, the calculations can be done quickly. However, when is large, calculating the numbers for becomes infeasible, so the algorithm is no longer practical. This means that if we want a discrete logarithm to be hard, we should make sure that has a large prime factor.
Note that even if has a large prime factor , the algorithm can determine discrete logs mod if is composed of small prime factors. For this reason, often is chosen to be a power of . Then the discrete log is automatically 0 mod , so the discrete log hides only mod information, which the algorithm cannot find. If the discrete log represents a secret (or better, times a secret), this means that an attacker does not obtain partial information by determining mod , since there is no information hidden this way. This idea is used in the Digital Signature Algorithm, which we discuss in Chapter 13.
Eve wants to find such that . She does the following. First, she chooses an integer with , for example (where means round up to the nearest integer). Then she makes two lists:
for
for
She looks for a match between the two lists. If she finds one, then
so . Therefore, solves the discrete log problem.
Why should there be a match? Since , we can write in base as with . In fact, and . Therefore,
gives the desired match.
The list for is the set of “Baby Steps” since the elements of the list are obtained by multiplying by , while the “Giant Steps” are obtained in the second list by multiplying by . It is, of course, not necessary to compute all of the second list. Each element, as it is computed, can be compared with the first list. As soon as a match is found, the computation stops.
The number of steps in this algorithm is proportional to and it requires storing approximately numbers. Therefore, the method works for primes up to , or even slightly larger, but is impractical for very large .
For an example, see Example 35 in the Computer Appendices.
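The baby step, giant step method can also be sketched directly in Python. The prime 104729 and the exponent below are toy choices of our own; note that the method works whenever β is actually a power of α, whether or not α is a primitive root.

```python
from math import isqrt

def bsgs(alpha, beta, p):
    # solve alpha^x ≡ beta (mod p) in about sqrt(p) time and memory
    N = isqrt(p - 1) + 1               # N >= sqrt(p - 1)
    baby = {}                          # baby steps: alpha^j -> j
    for j in range(N):
        baby.setdefault(pow(alpha, j, p), j)
    step = pow(alpha, p - 1 - N, p)    # alpha^(-N), by Fermat's theorem
    gamma = beta % p
    for k in range(N):                 # giant steps: beta * alpha^(-Nk)
        if gamma in baby:
            return k * N + baby[gamma]
        gamma = gamma * step % p
    return None                        # beta is not a power of alpha

p, alpha = 104729, 2                   # toy prime; a real p would be far larger
beta = pow(alpha, 70000, p)
x = bsgs(alpha, beta, p)
assert x is not None and pow(alpha, x, p) == beta
```

Each giant step is compared against the stored baby steps as it is computed, and the search stops at the first match, as described above.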
The idea is similar to the method of factoring in Subsection 9.4.1. Again, we are trying to solve , where is a large prime and is a primitive root.
First, there is a precomputation step. Let be a bound and let be the primes less than . This set of primes is called our factor base. Compute for several values of . For each such number, try to write it as a product of the primes less than . If this is not possible, discard . However, if , then
When we obtain enough such relations, we can solve for for each .
Now, for random integers , compute . For each such number, try to write it as a product of primes less than . If we succeed, we have , which means
This algorithm is effective if is of moderate size. This means that should be chosen to have at least 200 digits, maybe more, if the discrete log problem is to be hard.
Let and . Let , so we are working with the primes 2,3,5,7. A calculation yields the following:
Therefore,
The second congruence yields . Substituting this into the third congruence yields . The fourth congruence yields only the value of since . This gives two choices for . Of course, we could try them and see which works. Or we could use the fifth congruence to obtain . This finishes the precomputation step.
Suppose now that we want to find . Trying a few randomly chosen exponents yields , so
Therefore, .
Of course, once the precomputation has been done, it can be reused for computing several discrete logs for the same prime .
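The relation-collection part of the precomputation can be sketched as follows; solving the resulting congruences mod p − 1 is then done by hand, as in the example above. The parameters below (p = 1217, α = 3, factor base {2, 3, 5, 7, 11, 13}) are toy choices of our own, not the book's.

```python
def smooth_factor(m, factor_base):
    # try to write m as a product of primes in the factor base
    exps = {}
    for q in factor_base:
        while m % q == 0:
            exps[q] = exps.get(q, 0) + 1
            m //= q
    return exps if m == 1 else None     # None: m has a prime outside the base

def collect_relations(alpha, p, factor_base, count):
    # find exponents k with alpha^k (mod p) smooth over the factor base;
    # each gives a congruence  k ≡ sum of e_q * L_q  (mod p - 1)
    relations, k = [], 1
    while len(relations) < count and k < p - 1:
        exps = smooth_factor(pow(alpha, k, p), factor_base)
        if exps is not None:
            relations.append((k, exps))
        k += 1
    return relations

p, alpha = 1217, 3
fb = [2, 3, 5, 7, 11, 13]
rels = collect_relations(alpha, p, fb, 8)
for k, exps in rels:                    # sanity check each relation
    prod = 1
    for q, e in exps.items():
        prod *= q ** e
    assert pow(alpha, k, p) == prod
```

Each recorded pair (k, exponents) is one row of the linear system mod p − 1 whose unknowns are the discrete logs of the factor-base primes.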
When , the Pohlig-Hellman algorithm computes discrete logs mod 4 quite quickly. What happens when ? The Pohlig-Hellman algorithm won’t work, since it would require us to raise numbers to the power , and fractional exponents cause problems. The surprising fact is that if we have an algorithm that quickly computes discrete logs mod 4 for a prime , then we can use it to compute discrete logs mod quickly. Therefore, it is unlikely that such an algorithm exists.
There is a philosophical reason that we should not expect such an algorithm. A natural point of view is that the discrete log should be regarded as a number mod . Therefore, we should be able to obtain information on the discrete log only modulo the power of 2 that appears in . When , this means that asking questions about discrete logs mod 4 is somewhat unnatural. The question is possible only because we normalized the discrete log to be an integer between 0 and . For example, . We defined to be 6 in this case; if we had allowed it also to be 16, we would have two values for , namely 6 and 16, that are not congruent mod 4. Therefore, from this point of view, we shouldn’t even be asking about .
We need the following lemma, which is similar to the method for computing square roots mod a prime (see Section 3.9).
Let be prime, let , and let be an integer. Suppose and are two nonzero numbers mod such that . Then
Proof.
The final congruence is because of Fermat’s theorem.
Fix the prime and let be a primitive root. Assume we have a machine that, given an input , gives the output . As we saw previously, it is easy to compute . So the new information supplied by the machine is really only the second bit of the discrete log.
Now assume . Let be the binary expansion of . Using the machine, we determine and . Suppose we have determined with . Let
Using the lemma times, we find
Applying the machine to this equation yields the value of . Proceeding inductively, we obtain all the values . This determines , as desired.
It is possible to make this algorithm more efficient. See, for example, [Stinson1, page 175].
In conclusion, if we believe that finding discrete logs for is hard, then so is computing such discrete logs mod 4.
Alice claims that she has a method to predict the outcome of football games. She wants to sell her method to Bob. Bob asks her to prove her method works by predicting the results of the games that will be played this weekend. “No way,” says Alice. “Then you will simply make your bets and not pay me. If you want me to prove my system works, why don’t I show you my predictions for last week’s games?” Clearly there is a problem here. We’ll show how to resolve it.
Here’s the setup. Alice wants to send a bit , which is either 0 or 1, to Bob. There are two requirements.
Bob cannot determine the value of the bit without Alice’s help.
Alice cannot change the bit once she sends it.
One way is for Alice to put the bit in a box, put her lock on it, and send it to Bob. When Bob wants the value of the bit, Alice removes the lock and Bob opens the box. We want to implement this mathematically in such a way that Alice and Bob do not have to be in the same room when the bit is revealed.
Here is a solution. Alice and Bob agree on a large prime and a primitive root . Alice chooses a random number whose second bit is . She sends to Bob. We assume that Bob cannot compute discrete logs for . As pointed out in the last section, this means that he cannot compute discrete logs mod 4. In particular, he cannot determine the value of . When Bob wants to know the value of , Alice sends him the full value of , and by looking at , he finds . Alice cannot send a value of different than the one already used, since Bob checks that , and this equation has a unique solution .
Back to football: For each game, Alice sends b = 1 if she predicts the home team will win, b = 0 if she predicts it will lose. After the game has been played, Alice reveals the bit to Bob, who can see whether her predictions were correct. In this way, Bob cannot profit from the information by receiving it before the game, and Alice cannot change her predictions once the game has been played.
Bit commitment can also be accomplished with many other one-way functions. For example, Alice can take a random 100-bit string, followed by the bit b, followed by another 100-bit string. She applies the one-way function to this 201-bit string and sends the result to Bob. After the game, she sends the full 201-bit string to Bob, who applies the one-way function and compares with what Alice originally sent.
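A sketch of this variant, with SHA-256 standing in for the one-way function (the text only requires some one-way function; SHA-256 is our assumption here):

```python
import hashlib
import secrets

def commit(bit):
    # 100 random bits, the predicted bit, then 100 more random bits: 201 bits.
    s = (format(secrets.randbits(100), '0100b') + str(bit)
         + format(secrets.randbits(100), '0100b'))
    return s, hashlib.sha256(s.encode()).hexdigest()

def reveal(s, digest):
    # Bob re-applies the one-way function and compares with the commitment.
    assert len(s) == 201
    assert hashlib.sha256(s.encode()).hexdigest() == digest
    return int(s[100])                  # the middle bit is the prediction

s, d = commit(1)
assert reveal(s, d) == 1
```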
An important problem in cryptography is how to establish keys for use in cryptographic protocols such as DES or AES, especially when the two parties are widely separated. Public key methods such as RSA provide one solution. In the present section, we describe a different method, due to Diffie and Hellman, whose security is very closely related to the difficulty of computing discrete logarithms.
There are several technical implementation issues related to any key distribution scheme. Some of these are discussed in Chapter 15. In the present section, we restrict ourselves to the basic Diffie-Hellman algorithm. For more discussion of some security concerns about implementations of the Diffie-Hellman protocol, see [Adrian et al.].
Here is how Alice and Bob establish a private key K. All of their communications in the following algorithm are over public channels.
Either Alice or Bob selects a large prime number p for which the discrete logarithm problem is hard and a primitive root α (mod p). Both p and α can be made public.
Alice chooses a secret random x with 1 ≤ x ≤ p − 2, and Bob selects a secret random y with 1 ≤ y ≤ p − 2.
Alice sends α^x (mod p) to Bob, and Bob sends α^y (mod p) to Alice.
Using the messages that they each have received, they can each calculate the session key K ≡ α^{xy} (mod p). Alice calculates K by K ≡ (α^y)^x (mod p), and Bob calculates K by K ≡ (α^x)^y (mod p).
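The exchange can be traced with toy numbers (p = 1223, α = 5, and the two secret exponents are illustrative values; real parameters are hundreds of digits long):

```python
# Diffie-Hellman key exchange with toy parameters.
p, alpha = 1223, 5           # public; far too small for actual security

x, y = 411, 597              # Alice's and Bob's secret exponents
A = pow(alpha, x, p)         # Alice sends alpha^x mod p over the public channel
B = pow(alpha, y, p)         # Bob sends alpha^y mod p

key_alice = pow(B, x, p)     # Alice computes (alpha^y)^x
key_bob = pow(A, y, p)       # Bob computes (alpha^x)^y

assert key_alice == key_bob == pow(alpha, x * y, p)
```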
There is no reason that Alice and Bob need to use all of K as their key for their communications. Now that they have the same number K, they can use some prearranged procedure to produce a key. For example, they could use the middle 56 bits of K to obtain a DES key.
Suppose Eve listens to all the communications between Alice and Bob. She will know α^x and α^y (mod p). If she can compute discrete logs, then she can find the discrete log of α^x to obtain x. Then she raises α^y to the xth power to obtain K ≡ (α^y)^x ≡ α^{xy} (mod p). Once Eve has K, she can use the same procedure as Alice and Bob to extract a communication key. Therefore, if Eve can compute discrete logs, she can break the system.
However, Eve does not necessarily need to compute x or y to find K. What she needs to do is solve the following:
Computational Diffie-Hellman Problem: Let p be prime and let α be a primitive root mod p. Given α^x (mod p) and α^y (mod p), find α^{xy} (mod p).
It is not known whether or not this problem is easier than computing discrete logs. The reasoning above shows that it is no harder than computing discrete logs. A related problem is the following:
Decision Diffie-Hellman Problem: Let p be prime and let α be a primitive root mod p. Given α^x (mod p), α^y (mod p), and c, decide whether or not c ≡ α^{xy} (mod p).
In other words, if Eve claims that she has found c with c ≡ α^{xy} (mod p), and offers to sell you this information, can you decide whether or not she is telling the truth? Of course, if you can solve the computational Diffie-Hellman problem, then you simply compute α^{xy} (mod p) and check whether it is c (and then you can ignore Eve’s offer).
Conversely, does a method for solving the decision Diffie-Hellman problem yield a solution to the computational Diffie-Hellman problem? This is not known at present. One obvious method is to choose many values of c and test each one with the decision algorithm until one equals α^{xy} (mod p). But this brute force method takes at least as long as computing discrete logarithms by brute force, so is impractical. There are situations involving elliptic curves, analogous to the present setup, where a fast solution is known for the decision Diffie-Hellman problem but no practical solution is known for the computational Diffie-Hellman problem (see Exercise 8 in Chapter 22).
In Chapter 9, we studied a public key cryptosystem whose security is based on the difficulty of factoring. It is also possible to design a system whose security relies on the difficulty of computing discrete logarithms. This was done by ElGamal in 1985. This system does not quite fit the definition of a public key cryptosystem given at the end of Chapter 9, since the set of possible plaintexts (integers mod p) is not the same as the set of possible ciphertexts (pairs of integers mod p). However, this technical point will not concern us.
Alice wants to send a message m to Bob. Bob chooses a large prime p and a primitive root α. Assume m is an integer with 0 ≤ m < p. If m is larger, break it into smaller blocks. Bob also chooses a secret integer a and computes β ≡ α^a (mod p). The information (p, α, β) is made public and is Bob’s public key. Alice does the following:
Downloads (p, α, β)
Chooses a secret random integer k and computes r ≡ α^k (mod p)
Computes t ≡ β^k m (mod p)
Sends the pair (r, t) to Bob
Bob decrypts by computing t r^{−a} ≡ m (mod p).
This works because t r^{−a} ≡ β^k m (α^k)^{−a} ≡ (α^a)^k m α^{−ak} ≡ m (mod p).
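The whole system fits in a few lines. The prime, primitive root, and Bob's secret a below are made-up toy values:

```python
from random import randrange

p, alpha = 1223, 5           # public parameters (toy sizes)
a = 716                      # Bob's secret exponent
beta = pow(alpha, a, p)      # (p, alpha, beta) is Bob's public key

def encrypt(m, k=None):
    if k is None:
        k = randrange(1, p - 1)          # fresh random k for each message
    return pow(alpha, k, p), (m * pow(beta, k, p)) % p   # the pair (r, t)

def decrypt(r, t):
    # r^(p-1-a) = r^(-a) mod p, since r^(p-1) = 1 by Fermat's theorem
    return (t * pow(r, p - 1 - a, p)) % p

r, t = encrypt(1001)
assert decrypt(r, t) == 1001
```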
If Eve determines a, then she can also decrypt by the same procedure that Bob uses. Therefore, it is important for Bob to keep a secret. The numbers α and β are public, and β ≡ α^a (mod p). The difficulty of computing discrete logs is what keeps a secure.
Since k is a random integer, β^k is a random nonzero integer mod p. Therefore, t ≡ β^k m (mod p) is m multiplied by a random integer, and t is random mod p (unless m = 0, which should be avoided, of course). Therefore, t gives Eve no information about m. Knowing r does not seem to give Eve enough additional information.
The integer k is difficult to determine from r ≡ α^k (mod p), since this is again a discrete logarithm problem. However, if Eve finds k, she can then calculate t β^{−k}, which is m.
It is important that a different random k be used for each message. Suppose Alice encrypts messages m_1 and m_2 for Bob and uses the same value k for each message. Then r will be the same for both messages, so the ciphertexts will be (r, t_1) and (r, t_2). If Eve finds out the plaintext m_1, she can also determine m_2, as follows. Note that
t_1 / m_1 ≡ β^k ≡ t_2 / m_2 (mod p).
Since Eve knows t_1, t_2, and m_1, she computes m_2 ≡ t_2 m_1 / t_1 (mod p).
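The reused-k attack is mechanical to carry out (all numbers below are hypothetical toy values):

```python
p, alpha, a = 1223, 5, 716
beta = pow(alpha, a, p)

k = 88                                   # Alice mistakenly reuses this k
m1, m2 = 401, 777
r = pow(alpha, k, p)                     # identical r in both ciphertexts
t1 = (m1 * pow(beta, k, p)) % p
t2 = (m2 * pow(beta, k, p)) % p

# Eve sees (r, t1) and (r, t2) and learns m1.  Since t1/m1 = t2/m2 = beta^k,
# she recovers m2 = t2 * m1 / t1 (mod p):
m2_recovered = (t2 * m1 * pow(t1, -1, p)) % p
assert m2_recovered == m2
```

(`pow(t1, -1, p)` computes a modular inverse; it requires Python 3.8 or later.)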
In Chapter 21, we’ll meet an analog of the ElGamal method that uses elliptic curves.
Suppose Eve claims to have obtained the plaintext m corresponding to an RSA ciphertext c. It is easy to verify her claim: Compute m^e (mod n) and check whether this equals c. Now suppose instead that Eve claims to possess the message m corresponding to an ElGamal encryption (r, t). Can you verify her claim? It turns out that this is as hard as the decision Diffie-Hellman problem from Section 10.4. In this aspect, the ElGamal algorithm is therefore much different from the RSA algorithm (of course, if some randomness is added to an RSA plaintext through OAEP, for example, then RSA encryption has a similar property).
A machine that solves Decision Diffie-Hellman problems mod p can be used to decide the validity of mod p ElGamal ciphertexts, and a machine that decides the validity of mod p ElGamal ciphertexts can be used to solve Decision Diffie-Hellman problems mod p.
Proof. Suppose first that you have a machine M_1 that can decide whether an ElGamal decryption is correct. In other words, when given the inputs p, α, β, r, t, m, the machine outputs “yes” if m is the decryption of (r, t) and outputs “no” otherwise. Let’s use this machine to solve the decision Diffie-Hellman problem. Suppose you are given α^x (mod p) and α^y (mod p), and you want to decide whether or not c ≡ α^{xy} (mod p). Let β ≡ α^x and r ≡ α^y. Moreover, let t ≡ c and m ≡ 1. Input
p, α, β, r, t, m
into M_1. Note that in the present setup, x is the secret integer a, and y takes the place of the k. The correct decryption of (r, t) is t r^{−x} ≡ c α^{−xy} (mod p). Therefore, M_1 outputs “yes” exactly when c α^{−xy} is the same as m ≡ 1, namely when c ≡ α^{xy} (mod p). This solves the decision Diffie-Hellman problem.
Conversely, suppose you have a machine M_2 that can solve the decision Diffie-Hellman problem. This means that if you give M_2 inputs p, α, α^x, α^y, c, then M_2 outputs “yes” if c ≡ α^{xy} (mod p) and outputs “no” if not. Let m be the claimed decryption of the ElGamal ciphertext (r, t). Input β as α^x, so x = a, and input r as α^y, so y = k. Input t m^{−1} as c. Note that m is the correct plaintext for the ciphertext (r, t) if and only if m ≡ t r^{−a} (mod p), which happens if and only if t m^{−1} ≡ r^a ≡ α^{xy} (mod p). Therefore, m is the correct plaintext if and only if t m^{−1} is the solution to the Diffie-Hellman problem. Therefore, with these inputs, M_2 outputs “yes” exactly when m is the correct plaintext.
The reasoning just used can also be used to show that solving the computational Diffie-Hellman problem is equivalent to breaking the ElGamal system:
A machine that solves computational Diffie-Hellman problems mod p can be used to decrypt mod p ElGamal ciphertexts, and a machine that decrypts mod p ElGamal ciphertexts can be used to solve computational Diffie-Hellman problems mod p.
Proof. If we have a machine M_3 that can decrypt all ElGamal ciphertexts, then input β as α^x (so a = x) and r as α^y (so k = y). Take any nonzero value for t. Then M_3 outputs m ≡ t r^{−a} ≡ t α^{−xy} (mod p). Therefore, t m^{−1} ≡ α^{xy} (mod p) yields the solution to the computational Diffie-Hellman problem.
Conversely, suppose we have a machine M_4 that can solve computational Diffie-Hellman problems. If we have an ElGamal ciphertext (r, t), then we input β as α^x and r as α^y. Then M_4 outputs α^{xy} ≡ r^a (mod p). Since m ≡ t (r^a)^{−1} (mod p), we obtain the plaintext m.
Let . Compute .
Show that .
Let . Compute .
Show that .
Compute .
Let . Then is a primitive root. Suppose . Without finding the value of , determine whether is even or odd.
Let . Then 2 is a primitive root. Use the Pohlig-Hellman method to compute .
It can be shown that 5 is a primitive root for the prime 1223. You want to solve the discrete logarithm problem . Given that , determine whether is even or odd.
Let be a primitive root mod . Show that
(Hint: You need the proposition in Section 3.7.)
More generally, let be arbitrary. Show that
where is defined in Exercise 53 in Chapter 3.
Let , so is a primitive root. It can be shown that and .
Using the fact that , evaluate .
Using the fact that , evaluate .
The number 12347 is prime. Suppose Eve discovers that . Find an integer with such that .
Suppose you know that
Find a value of with such that .
Let be a large prime and suppose . Suppose for some integer .
Explain why we may assume that .
Describe a Baby Step, Giant Step method to find . (Hint: One list can contain numbers of the form .)
Suppose you have a random 500-digit prime . Suppose some people want to store passwords, written as numbers. If is the password, then the number is stored in a file. When is given as a password, the number is compared with the entry for the user in the file. Suppose someone gains access to the file. Why is it hard to deduce the passwords?
Suppose is instead chosen to be a five-digit prime. Why would the system in part (a) not be secure?
Let’s reconsider Exercise 55 in Chapter 3 from the point of view of the Pohlig-Hellman algorithm. The only prime is 2. For as in that exercise, write .
Show that the Pohlig-Hellman algorithm yields
and
Use the Pohlig-Hellman algorithm to compute .
In the Diffie-Hellman Key Exchange protocol, suppose the prime is and the primitive root is . Alice’s secret is and Bob’s secret is . Describe what Alice and Bob send each other and determine the shared secret that they obtain.
In the Diffie-Hellman Key Exchange protocol, Alice thinks she can trick Eve by choosing her secret to be . How will Eve recognize that Alice made this choice?
In the Diffie-Hellman key exchange protocol, Alice and Bob choose a primitive root for a large prime . Alice sends to Bob, and Bob sends to Alice. Suppose Eve bribes Bob to tell her the values of and . However, he neglects to tell her the value of . Suppose . Show how Eve can determine from the knowledge of , and .
In the ElGamal cryptosystem, Alice and Bob use and . Bob chooses his secret to be , so . Alice sends the ciphertext . Determine the plaintext .
Consider the following Baby Step, Giant Step attack on RSA, with public modulus . Eve knows a plaintext and a ciphertext with . She chooses and makes two lists: The first is for . The second is for .
Why is there always a match between the two lists, and how does a match allow Eve to find the decryption exponent ?
Your answer to (a) is probably partly false. What you have really found is an exponent such that . Give an example of a plaintext–ciphertext pair where the exponent you find is not the decryption exponent. (However, usually it is very close to being the correct decryption exponent.)
Why is this not a useful attack on RSA? (Hint: How long are the lists compared to the time needed to factor by trial division?)
Alice and Bob are using the ElGamal public key cryptosystem, but have set it up so that only Alice, Bob, and a few close associates (not Eve) know Bob’s public key. Suppose Alice is sending the message dismiss Eve to Bob, but Eve intercepts the message and prevents Bob from receiving it. How can Eve change the message to promote Eve before sending it to Bob?
Let . Verify that .
Let . Evaluate .
Let . Then 2 is a primitive root mod .
Show that and .
Compute . (Note: The answer should be less than 3988.)
Let .
Show that .
Use the method of Exercise 54 in Chapter 3 plus the result of part (a) to show that 11 is a primitive root mod 1201.
Use the Pohlig-Hellman algorithm to find .
Use the Baby Step, Giant Step method to find .
A basic component of many cryptographic algorithms is what is known as a hash function. When a hash function satisfies certain non-invertibility properties, it can be used to make many algorithms more efficient. In the following, we discuss the basic properties of hash functions and attacks on them. We also briefly discuss the random oracle model, which is a method of analyzing the security of algorithms that use hash functions. Later, in Chapter 13, hash functions will be used in digital signature algorithms. They also play a role in security protocols in Chapter 15, and in several other situations.
A cryptographic hash function takes as input a message of arbitrary length and produces as output a message digest of fixed length, for example, 256 bits, as depicted in Figure 11.1. Certain properties should be satisfied:
Given a message m, the message digest h(m) can be calculated very quickly.
Given a message digest y, it is computationally infeasible to find an m with h(m) = y (in other words, h is a one-way, or preimage resistant, function). Note that if y is the message digest of some message, we are not trying to find this message. We are only looking for some m with h(m) = y.
It is computationally infeasible to find messages m_1 and m_2 with h(m_1) = h(m_2) (in this case, the function h is said to be strongly collision resistant).
Note that since the set of possible messages is much larger than the set of possible message digests, there should always be many examples of messages m_1 and m_2 with h(m_1) = h(m_2). The requirement (3) says that it should be hard to find examples. In particular, if Bob produces a message m and its hash h(m), Alice wants to be reasonably certain that Bob does not know another message m′ with h(m′) = h(m), even if both m and m′ are allowed to be random strings of symbols.
Preimage resistance and collision resistance are closely related, but we list them separately because they are used in slightly different circumstances. The following argument shows that, for our hash functions, collision resistance implies preimage resistance: Suppose h is not preimage resistant. Take a random m and compute y = h(m). Since h is not preimage resistant, we can quickly find m′ with h(m′) = y. Because h is many-to-one, it is likely that m′ ≠ m, so we have a collision, contradicting the collision resistance of h. However, there are examples that show that for arbitrary functions, collision resistance does not imply preimage resistance. See Exercise 12.
In practice, it is sometimes sufficient to weaken (3) to require h to be weakly collision resistant. This means that given m, it is computationally infeasible to find m′ ≠ m with h(m′) = h(m). This property is also called second preimage resistance.
Requirement (3) is the hardest one to satisfy. In fact, in 2004, Wang, Feng, Lai, and Yu (see [Wang et al.]) found many examples of collisions for the popular hash functions MD4, MD5, HAVAL-128, and RIPEMD. The MD5 collisions have been used by Ondrej Mikle [Mikle] to create two different and meaningful documents with the same hash, and the paper [Lenstra et al.] shows how to produce examples of X.509 certificates (see Section 15.5) with the same MD5 hash (see also Exercise 15). This means that a valid digital signature (see Chapter 13) on one certificate is also valid for the other certificate, hence it is impossible for someone to determine which is the certificate that was legitimately signed by a Certification Authority. It has been reported that weaknesses in MD5 were part of the design of the Flame malware, which attacked several computers in the Middle East, including Iran’s oil industry, from 2010 to 2012.
In 2005, Wang, Yin, and Yu [Wang et al. 2] predicted that collisions could be found for the hash function SHA-1 with around 2^69 calculations, which is much better than the expected 2^80 calculations required by the birthday attack (see Section 12.1). In addition, they found collisions in a smaller 60-round version of SHA-1. These weaknesses were a cause for concern for using these hash algorithms and led to research into replacements. Finally, in 2017, a joint project between CWI Amsterdam and Google Research found collisions for SHA-1 [Stevens et al.]. Although SHA-1 is still common, it is starting to be used less and less.
One of the main uses of hash functions is in digital signatures. Since the length of a digital signature is often at least as long as the document being signed, it is much more efficient to sign the hash of a document rather than the full document. This will be discussed in Chapter 13.
Hash functions may also be employed as a check on data integrity. The question of data integrity comes up in basically two scenarios. The first is when the data (encrypted or not) are being transmitted to another person and a noisy communication channel introduces errors to the data. The second occurs when an observer rearranges the transmission in some manner before it gets to the receiver. Either way, the data have become corrupted.
For example, suppose Alice sends Bob long messages about financial transactions with Eve and encrypts them in blocks. Perhaps Eve deduces that the tenth block of each message lists the amount of money that is to be deposited to Eve’s account. She could easily substitute the tenth block from one message into another and increase the deposit.
In another situation, Alice might send Bob a message consisting of several blocks of data, but one of the blocks is lost during transmission. Bob might never realize that the block is missing.
Here is how hash functions can be used. Say we send m over the communications channel and it is received as m′. To check whether errors might have occurred, the recipient computes h(m′) and sees whether it equals h(m). If any errors occurred, it is likely that h(m′) ≠ h(m), because of the collision-resistance properties of h.
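For instance, with SHA-256 playing the role of h (any cryptographic hash would serve; the messages below are invented):

```python
import hashlib

def digest(msg):
    return hashlib.sha256(msg).hexdigest()

m = b"deposit 100 to account 314"
tag = digest(m)                           # sent or stored alongside m

received = b"deposit 900 to account 314"  # altered in transit
assert digest(received) != tag            # the alteration is detected
assert digest(m) == tag                   # intact data passes the check
```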
Let n be a large integer. Let h(m) ≡ m (mod n) be regarded as an integer between 0 and n − 1. This function clearly satisfies (1). However, (2) and (3) fail: Given y, let m = y. Then h(m) = y. So h is not one-way. Similarly, choose any two values m_1 and m_2 that are congruent mod n. Then h(m_1) = h(m_2), so h is not strongly collision resistant.
The following example, sometimes called the discrete log hash function, is due to Chaum, van Heijst, and Pfitzmann [Chaum et al.]. It satisfies (2) and (3) but is much too slow to be used in practice. However, it demonstrates the basic idea of a hash function.
First we select a large prime number q such that p = 2q + 1 is also prime (see Exercise 15 in Chapter 13). We now choose two primitive roots α and β for p. Since α is a primitive root, there exists a such that β ≡ α^a (mod p). However, we assume that a is not known (finding a, if not given it in advance, involves solving a discrete log problem, which we assume is hard).
The hash function h will map integers mod q^2 to integers mod p. Therefore, the message digest contains approximately half as many bits as the message. This is not as drastic a reduction in size as is usually required in practice, but it suffices for our purposes.
Write m = x_0 + x_1 q with 0 ≤ x_0, x_1 ≤ q − 1. Then define
h(m) ≡ α^{x_0} β^{x_1} (mod p).
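A toy instance in Python. The values q = 1013, p = 2027, α = 2, β = 5 are small illustrative choices (real use needs large primes); the code verifies the required properties rather than taking them on faith:

```python
q, p = 1013, 2027                 # both prime, and p = 2q + 1 (toy sizes)
alpha, beta = 2, 5
# An element's order divides p - 1 = 2q.  x^q = -1 rules out orders 1 and q,
# and x^2 != 1 rules out order 2, so each element below has order 2q,
# i.e., both are primitive roots mod p.
assert pow(alpha, q, p) == p - 1 and pow(alpha, 2, p) != 1
assert pow(beta, q, p) == p - 1 and pow(beta, 2, p) != 1

def h(m):
    """Hash an integer 0 <= m < q*q to a nonzero integer mod p."""
    x0, x1 = m % q, m // q        # m = x0 + x1*q with 0 <= x0, x1 <= q - 1
    return (pow(alpha, x0, p) * pow(beta, x1, p)) % p

assert h(1) == 2                  # m = 1: x0 = 1, x1 = 0, so h(m) = alpha
assert h(q) == 5                  # m = q: x0 = 0, x1 = 1, so h(m) = beta
```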
The following shows that the function h is probably strongly collision resistant.
If we know messages m ≠ m′ with h(m) = h(m′), then we can determine the discrete logarithm a = L_α(β).
Proof
Write m = x_0 + x_1 q and m′ = x_0′ + x_1′ q. Suppose
α^{x_0} β^{x_1} ≡ α^{x_0′} β^{x_1′} (mod p).
Using the fact that β ≡ α^a (mod p), we rewrite this as
α^{x_0 − x_0′ + a(x_1 − x_1′)} ≡ 1 (mod p).
Since α is a primitive root mod p, we know that α^k ≡ 1 (mod p) if and only if k ≡ 0 (mod p − 1). In our case, this means that
x_0 − x_0′ + a(x_1 − x_1′) ≡ 0 (mod p − 1).
Let d = gcd(x_1′ − x_1, p − 1). There are exactly d solutions a to the preceding congruence (see Subsection 3.3.1), and they can be found quickly. By the choice of q, the only factors of p − 1 are 1, 2, q, 2q. Since 0 ≤ x_1, x_1′ ≤ q − 1, it follows that −(q − 1) ≤ x_1′ − x_1 ≤ q − 1. Therefore, if x_1′ − x_1 ≠ 0, then it is a nonzero multiple of d of absolute value less than q. This means that d ≠ q, 2q, so d = 1 or 2. Therefore, there are at most two possibilities for a. Calculate α^a for each possibility; only one of them will yield β. Therefore, we obtain a, as desired.
On the other hand, if x_1′ = x_1, then the preceding congruence yields x_0 − x_0′ ≡ 0 (mod p − 1). Since |x_0 − x_0′| ≤ q − 1 < p − 1, we must have x_0 = x_0′. Therefore, m = m′, contrary to our assumption.
It is now easy to show that h is preimage resistant. Suppose we have an algorithm that starts with a message digest y and quickly finds an m with h(m) = y. In this case, it is easy to find m ≠ m′ with h(m) = h(m′): Choose a random m′ and compute y = h(m′), then compute m with h(m) = y. Since h maps the q^2 possible messages to the p − 1 possible message digests, there are many messages with digest y. It is therefore not very likely that m = m′. If it is, try another random m′. Soon, we should find a collision, that is, messages m ≠ m′ with h(m) = h(m′). The preceding proposition shows that we can then solve a discrete log problem. Therefore, it is unlikely that such an algorithm exists.
As we mentioned earlier, this hash function is good for illustrative purposes but is impractical because of its slow nature. Although it can be computed efficiently via repeated squaring, it turns out that even repeated squaring is too slow for practical applications. In applications such as electronic commerce, the extra time required to perform the multiplications in software is prohibitive.
There are many families of hash functions. The discrete log hash function that we described in the previous section is too slow to be of practical use. One reason is that it employs modular exponentiation, which makes its computational requirements about the same as RSA or ElGamal. Even though modular exponentiation is fast, it is not fast enough for the massive inputs that are used in some situations. The hash functions described in this section and the next are easily seen to involve only very basic operations on bits and therefore can be carried out much faster than procedures such as modular exponentiation.
We now describe the basic idea behind many cryptographic hash functions by giving a simple hash function that shares many of the basic properties of hash functions that are used in practice. This hash function is not an industrial-strength hash function and should never be used in any system.
Suppose we start with a message m of arbitrary length L. We may break m into n-bit blocks, where n is much smaller than L. We denote these n-bit blocks by m_j, and thus represent m = [m_1, m_2, …, m_l]. Here l = ⌈L/n⌉, and the last block m_l is padded with zeros to ensure that it has n bits.
We write the jth block m_j as a row vector
[m_{j1}, m_{j2}, …, m_{jn}],
where each m_{ji} is a bit.
Now, we may stack these row vectors to form an l × n array. Our hash h(m) will have n bits, where we calculate the ith bit c_i as the XOR along the ith column of the matrix, that is, c_i = m_{1i} ⊕ m_{2i} ⊕ ⋯ ⊕ m_{li}.
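As a sketch, with the message represented as a Python list of bits:

```python
def xor_hash(bits, n):
    """XOR the n-bit blocks of `bits` column by column."""
    bits = bits + [0] * (-len(bits) % n)           # zero-pad the last block
    blocks = [bits[i:i + n] for i in range(0, len(bits), n)]
    return [sum(col) % 2 for col in zip(*blocks)]  # c_i = XOR of column i

# Blocks [1,0,1], [1,1,0], [0,1,0] (last one padded) XOR to [0,0,1].
assert xor_hash([1, 0, 1, 1, 1, 0, 0, 1], 3) == [0, 0, 1]
```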
This hash function is able to take an arbitrary-length message and output an n-bit message digest. It is not considered cryptographically secure, though, since it is easy to find two messages that hash to the same value (Exercise 9).
Practical cryptographic hash functions typically make use of several other bit-level operations in order to make it more difficult to find collisions. Section 11.4 contains many examples of such operations.
One operation that is often used is bit rotation. We define the right rotation operation
y = ROTR^r(x)
as the result of shifting x to the right by r positions and wrapping the rightmost r bits around, placing them in the leftmost r bit locations. Then ROTL^r(x) gives a similar rotation of x by r places to the left.
We may modify our simple hash function above by requiring that block m_j be left rotated by j − 1 positions, to produce a new block m_j′ = ROTL^{j−1}(m_j). We may now arrange the m_j′ in columns and define a new, simple hash function by XORing these columns. Thus, we get
c_i = m′_{1i} ⊕ m′_{2i} ⊕ ⋯ ⊕ m′_{li}.
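A sketch of the rotated variant, assuming the jth block is rotated left by j − 1 positions (the exact rotation schedule is a convention; any fixed, block-dependent schedule illustrates the same idea):

```python
def rotl(block, r):
    """Left-rotate a list of bits by r positions."""
    r %= len(block)
    return block[r:] + block[:r]

def rot_hash(bits, n):
    bits = bits + [0] * (-len(bits) % n)           # zero-pad the last block
    blocks = [bits[i:i + n] for i in range(0, len(bits), n)]
    rotated = [rotl(b, j) for j, b in enumerate(blocks)]   # block j+1 <- rotate by j
    return [sum(col) % 2 for col in zip(*rotated)]  # XOR the rotated columns

assert rot_hash([1, 0, 1, 1, 1, 0], 3) == [0, 0, 0]
```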
This new hash function involving rotations mixes the bits in one position with those in another, but it is still easy to find collisions (Exercise 9). Building a cryptographic hash requires considerably more tricks than just rotating. In later sections, we describe hash functions that are used in practice. They use the techniques of the present section, coupled with many more ways of mixing the bits.
Until recently, most hash functions used a form of the Merkle-Damgård construction. It was invented independently by Ralph Merkle in 1979 and Ivan Damgård in 1989. The main ingredient is a function f, usually called a compression function. It takes two bitstrings as inputs, call them M and H, and outputs a bitstring f(M, H) of the same length as H. For example, M could have length 512 and H could have length 256. These are the sizes that the hash function SHA-256 uses, and we’ll use them for concreteness. The message that is to be hashed is suitably padded so that its length is a multiple of 512, and then broken into blocks of length 512:
M = M_1 ∥ M_2 ∥ ⋯ ∥ M_l.
An initial value H_0 is set. Then the blocks are fed one-by-one into f via H_j = f(M_j, H_{j−1}), and the final output is the hash value:
H(M) = H_l.
This construction is very natural: The blocks are read from the message one at a time and stirred into the mix with the previous blocks. The final result is the hash value.
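The chaining can be sketched as follows, with SHA-256 serving only as a stand-in compression function (the compression functions inside real designs are specified bit by bit; this shows just the iteration, and the naive zero-padding here is a simplification of real padding, which also encodes the message length):

```python
import hashlib

BLOCK = 64                                     # 512-bit message blocks

def f(m_block, h_prev):
    """Stand-in compression function: 512 + 256 bits in, 256 bits out."""
    return hashlib.sha256(m_block + h_prev).digest()

def md_hash(msg, iv=b'\x00' * 32):
    msg = msg + b'\x00' * (-len(msg) % BLOCK)  # naive padding (sketch only)
    h = iv
    for i in range(0, len(msg), BLOCK):
        h = f(msg[i:i + BLOCK], h)             # H_j = f(M_j, H_{j-1})
    return h

assert md_hash(b'abc') == md_hash(b'abc')      # deterministic
assert md_hash(b'abc') != md_hash(b'abd')      # sensitive to the input
```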
Over the years, some disadvantages of the method have been discovered. One is called the length extension attack. For example, suppose Alice wants to ensure that her message M to Bob has not been tampered with. They both have a secret key K, so Alice prepends M with K to get K∥M. She sends both M and H(K∥M) to Bob. Since Bob also knows K, he computes H(K∥M) and checks that it agrees with the hash value sent by Alice. If so, Bob concludes that the message is authentic.
Since Eve does not know K, she cannot send her own message M′ along with H(K∥M′). But, because of the iterative form of the hash function, Eve can append blocks E to M if she intercepts Alice’s communication. Then Eve sends
M∥E and H(K∥M∥E) = f(E, H(K∥M))
to Bob. Since she knows H(K∥M), she can produce a message that Bob will regard as authentic. Of course, this attack can be thwarted by using H(M∥K) instead of H(K∥M), but it points to a weakness in the construction.
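The attack is mechanical once the iterative structure is in view. In this sketch (a toy iterated hash with padding omitted so that blocks line up; the key, message, and block contents are invented), Eve forges a valid tag for the extended message knowing only the tag, never the key:

```python
import hashlib

BLOCK = 64                               # 512-bit blocks

def f(m_block, h_prev):                  # stand-in compression function
    return hashlib.sha256(m_block + h_prev).digest()

def toy_md(msg, iv=b'\x00' * 32):
    assert len(msg) % BLOCK == 0         # no padding, to keep the sketch simple
    for i in range(0, len(msg), BLOCK):
        iv = f(msg[i:i + BLOCK], iv)
    return iv

key = b'K' * 32                          # secret; Eve never sees this
m = b'M' * 32                            # key || m fills exactly one block
tag = toy_md(key + m)                    # what Alice transmits: H(K || M)

evil = b'E' * BLOCK                      # Eve's appended block
forged = f(evil, tag)                    # computed from the tag alone
assert forged == toy_md(key + m + evil)  # valid tag for the extended message
```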
However, using H(M∥K) might also cause problems if Eve discovers a way of producing collisions for H. Namely, Eve finds M and M′ with H(M) = H(M′). If Eve can arrange this so that M is a good message and M′ is a bad message (see Section 12.1), then Eve arranges for Alice to authenticate M by computing H(M∥K), which equals H(M′∥K). This means that Alice has also authenticated M′.
Another attack was given by Daum and Lucks in 2005. Suppose Alice is using a high-level document language such as PostScript, which is really a program rather than just a text file. The file begins with a preamble that identifies the file as PostScript and gives some instructions. Then the content of the file follows.
Suppose Eve is able to find random strings r_1 and r_2 such that
h(preamble ∥ put(r_1)) = h(preamble ∥ put(r_2)),
where put(r) instructs the PostScript program to put the string r in a certain register. In other words, we are assuming that Eve has found a collision of this form. If any string s is appended to these messages, there is still a collision
h(preamble ∥ put(r_1) ∥ s) = h(preamble ∥ put(r_2) ∥ s)
because of the iterative nature of the hash algorithm (we are ignoring the effects of padding).
Of course, Eve has an evil document E, perhaps saying that Alice (who is a bank president) gives Eve access to the bank’s vault. Eve also produces a document G that Alice will be willing to sign, for example, a petition to give bank presidents tax breaks. Eve then produces two messages:
T_1 = preamble ∥ put(r_1) ∥ (if the register holds r_1, display G; otherwise, display E)
T_2 = preamble ∥ put(r_2) ∥ (if the register holds r_1, display G; otherwise, display E).
When T_1 is compiled, it puts r_1 into the register, the comparison succeeds, and G is produced. When T_2 is compiled, it puts r_2 into the register; r_2 and r_1 are not equal, so E is produced. Eve now has two PostScript files, T_1 and T_2, with h(T_1) = h(T_2). As we’ll see in Chapter 13, it is standard for Alice to sign the hash of a message rather than the message itself. Eve shows T_1 to Alice, who compiles it. The output is the petition that Alice is happy to sign. So Alice signs h(T_1). But this means Alice has also signed h(T_2). Eve takes T_2 to the bank, along with Alice’s signature on its hash value. The security officer at the bank checks that the signature is valid, then opens the document, which says that Alice grants Eve access to the bank’s vault. This potentially costly forgery relies on Eve being able to find a collision, but again it shows a weakness in the construction if there is a possibility of finding collisions.
In this section and the next, we look at what is involved in making a real cryptographic hash function. Unlike block ciphers, where there are many algorithms to choose from, only a few hash functions are used in practice. The most notable of these are the Secure Hash Algorithm (SHA) family, the Message Digest (MD) family, and the RIPEMD-160 message digest algorithm. The original MD algorithm was never published, and the first MD algorithm to be published was MD2, followed by MD4 and MD5. Weaknesses in MD2 and MD4 were found, and MD5 was proposed by Ron Rivest as an improvement upon MD4. Collisions have been found for MD5, and the strength of MD5 is now less certain.
The Secure Hash Algorithm was developed by the National Security Agency (NSA) and given to the National Institute of Standards and Technology (NIST). The original version, often referred to as SHA or SHA-0, was published in 1993 as a Federal Information Processing Standard (FIPS 180). SHA contained a weakness that was later uncovered by the NSA, which led to a revised standards document (FIPS 180-1) that was released in 1995. This revised document describes the improved version, SHA-1, which for several years was the hash algorithm recommended by NIST. However, weaknesses started to appear and in 2017, a collision was found (see the discussion in Section 11.1). SHA-1 is now being replaced by a series of more secure versions called SHA-2. They still use the Merkle-Damgård construction. In the next section, we’ll meet SHA-3, which uses a different construction.
The reader is warned that the discussion that follows is fairly technical and is provided in order to give the flavor of what happens inside a hash function.
The SHA-2 family consists of six algorithms: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. The number at the end of each name indicates the number of bits in the output. We’ll describe SHA-256. The other five are very similar.
SHA-256 produces a 256-bit hash and is built upon the same design principles as MD4, MD5, and SHA-1. These hash functions use an iterative procedure. Just as we did earlier, the original message is broken into a set of fixed-size blocks, M_1, M_2, …, M_l, where the last block is padded to fill out the block. The message blocks are then processed via a sequence of rounds that use a compression function f that combines the current block and the result from the previous round. That is, we start with an initial value X_0, and define X_j = f(M_j, X_{j−1}). The final X_l is the message digest.
The trick behind building a hash function is to devise a good compression function. This compression function should be built in such a way as to make each input bit affect as many output bits as possible. One main difference between the SHA family and the MD family is that for SHA the input bits are used more often during the course of the hash function than they are for MD4 and MD5. This more conservative approach makes the design of SHA-1 and SHA-2 more secure than either MD4 or MD5, but also makes it a little slower.
In the description of the hash algorithm, we need the following operations on strings of 32 bits:
X ∧ Y: bitwise “and”, which is bitwise multiplication mod 2, or bitwise minimum.
X ∨ Y: bitwise “or”, which is bitwise maximum.
X ⊕ Y: bitwise addition mod 2 (XOR).
¬X: changes 1s to 0s and 0s to 1s (bitwise complement).
X + Y: addition of X and Y mod 2^32, where X and Y are regarded as integers mod 2^32.
ROTR^n(X): rotation of X to the right by n positions (the end wraps around to the beginning).
SHR^n(X): shift of X to the right by n positions, with the first n bits becoming 0s (so the bits at the end disappear and do not wrap around).
We also need the following functions that operate on 32-bit strings:
Ch(X, Y, Z) = (X ∧ Y) ⊕ (¬X ∧ Z)
Maj(X, Y, Z) = (X ∧ Y) ⊕ (X ∧ Z) ⊕ (Y ∧ Z)
Σ_0(X) = ROTR^2(X) ⊕ ROTR^13(X) ⊕ ROTR^22(X)
Σ_1(X) = ROTR^6(X) ⊕ ROTR^11(X) ⊕ ROTR^25(X)
σ_0(X) = ROTR^7(X) ⊕ ROTR^18(X) ⊕ SHR^3(X)
σ_1(X) = ROTR^17(X) ⊕ ROTR^19(X) ⊕ SHR^10(X)
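These bit operations translate directly into code; the rotation and shift amounts below are the SHA-256 constants from FIPS 180-4:

```python
M32 = 0xFFFFFFFF                         # mask to 32 bits

def rotr(x, n):                          # ROTR^n: rotate right, bits wrap
    return ((x >> n) | (x << (32 - n))) & M32

def shr(x, n):                           # SHR^n: shift right, bits fall off
    return x >> n

def Ch(x, y, z):   return (x & y) ^ (~x & z & M32)
def Maj(x, y, z):  return (x & y) ^ (x & z) ^ (y & z)
def Sigma0(x):     return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
def Sigma1(x):     return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)
def sigma0(x):     return rotr(x, 7) ^ rotr(x, 18) ^ shr(x, 3)
def sigma1(x):     return rotr(x, 17) ^ rotr(x, 19) ^ shr(x, 10)

assert rotr(1, 1) == 0x80000000          # the low bit wraps to the top...
assert shr(1, 1) == 0                    # ...but a shift discards it
assert Ch(M32, 0xABCD, 0x1234) == 0xABCD # x of all 1s selects y
```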
Define initial hash values H_0^(0), ..., H_7^(0) as follows:

H_0^(0) = 6A09E667, H_1^(0) = BB67AE85, H_2^(0) = 3C6EF372, H_3^(0) = A54FF53A,
H_4^(0) = 510E527F, H_5^(0) = 9B05688C, H_6^(0) = 1F83D9AB, H_7^(0) = 5BE0CD19.

The preceding are written in hexadecimal notation. Each digit or letter represents a string of four bits: 0 = 0000, 1 = 0001, ..., 9 = 1001, A = 1010, B = 1011, C = 1100, D = 1101, E = 1110, F = 1111.
For example, BA1 equals 1011 1010 0001.
These initial hash values are obtained by using the first eight digits of the fractional parts of the square roots of the first eight primes, expressed as “decimals” in base 16. See Exercise 7.
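This derivation can be checked directly (a sketch, not from the text; `math.isqrt` computes an exact integer square root, so the hexadecimal digits come out exactly):

```python
# Sketch: recompute the SHA-256 initial hash values from the fractional
# parts of the square roots of the first eight primes.
from math import isqrt

def frac_sqrt_hex(p, digits=8):
    scale = 16 ** digits
    whole = isqrt(p * scale * scale)   # floor(sqrt(p) * 16^digits), exactly
    return whole % scale               # keep only the fractional hex digits

primes = [2, 3, 5, 7, 11, 13, 17, 19]
H = [frac_sqrt_hex(p) for p in primes]

assert format(H[0], '08X') == '6A09E667'   # sqrt(2) = 1.6A09E667...
assert format(H[7], '08X') == '5BE0CD19'   # from sqrt(19)
```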
We also need sixty-four 32-bit words K_0, K_1, ..., K_63.
They are the first eight hexadecimal digits of the fractional parts of the cube roots of the first 64 primes.
SHA-256 begins by taking the original message and padding it with the bit 1 followed by a sequence of 0 bits. Enough bits are appended to make the new message 64 bits short of the next highest multiple of 512 bits in length. Following the appending of the 1 and the 0s, we append the 64-bit binary representation of the length of the original message. (This restricts the messages to length less than 2^64 bits, which is not a problem in practice.)
For example, if the original message has 2800 bits, we add a 1 and 207 0s to obtain a new message of length 3008 = 3072 - 64. Since 2800 = 101011110000 in binary, we append fifty-two 0s followed by 101011110000 to obtain a message of length 3072. This is broken into six blocks of length 512.
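The bookkeeping in this example can be verified mechanically (a sketch, not from the text):

```python
# Sketch: check the SHA-256 padding arithmetic for a 2800-bit message.
def padded_length(msg_bits):
    # append a 1, then k zeros so the total is 64 bits short of a
    # multiple of 512, then the 64-bit length field
    k = (448 - (msg_bits + 1)) % 512
    return msg_bits + 1 + k + 64

assert (448 - 2801) % 512 == 207          # number of appended 0s
assert 2800 + 1 + 207 == 3008             # 64 bits short of 3072
assert bin(2800)[2:] == '101011110000'    # the length in binary (12 bits)
assert 52 + 12 == 64                      # fifty-two 0s + 12 length bits
assert padded_length(2800) == 3072        # six blocks of 512 bits
```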
Break the message with padding into N blocks of length 512: M^(1), M^(2), ..., M^(N).
The hash algorithm inputs these blocks one by one. In the algorithm, each 512-bit block is divided into sixteen 32-bit blocks: W_0, W_1, ..., W_15.
There are eight 32-bit registers, labeled a, b, c, d, e, f, g, h. These contain the intermediate hash values. The algorithm inputs a block of 512 bits from the message in Step 11, and in Steps 12 through 24, it stirs the bits of this block into a mix with the bits from the current intermediate hash values. After 64 iterations of this stirring, the algorithm produces an output that is added (mod 2^32) onto the previous intermediate hash values to yield the new intermediate hash values. After all of the blocks of the message have been processed, the final intermediate hash values give the hash of the message.
The basic building block of the algorithm is the set of operations that take place on the registers a through h in Steps 15 through 24. They take the registers and operate on them using rotations, XORs, and other similar operations.
For more details on hash functions, and for some of the theory involved in their construction, see [Stinson], [Schneier], and [Menezes et al.].
In 2007, NIST announced a competition to produce a new hash function to serve alongside SHA-2. The new function was required to be at least as secure as SHA-2 and to have the same four output possibilities. Fifty-one entries were accepted for the first round, and in 2012, Keccak was announced as the winner. It was standardized by NIST in 2015 in FIPS 202. (The name is pronounced “ketchak”. It has been suggested that the name is related to “Kecak,” a type of Balinese dance. Perhaps the movement of the dancers is analogous to the movement of the bits during the algorithm.) This algorithm became the hash function SHA-3.
The SHA-3 algorithm was developed by Guido Bertoni, Joan Daemen, and Gilles Van Assche from STMicroelectronics and Michaël Peeters from NXP Semiconductors. It differs from the Merkle-Damgård construction and is based on the theory of sponge functions. The idea is that the first part of the algorithm absorbs the message, and then the hash value is squeezed out. Here is how it works. The state of the machine is a string of b bits, which is fed to a function f that takes an input of b bits and outputs a string of b bits, thus producing a new state of the machine. In contrast to the compression functions in the Merkle-Damgård construction, the function f is a one-to-one function. Such a function could not be used in the Merkle-Damgård situation, since there the number of input bits (from the message block and the previous step) is greater than the number of output bits. But the different construction in the present case allows it.
Parameters r (“the rate”) and c (“the capacity”) are chosen so that r + c = b. The message (written in binary) is padded so that its length is a multiple of r, then is broken into blocks of length r: M_1, M_2, ..., M_n.
To start, the state is initialized to all 0s. The absorption stage is the first part of Figure 11.2.
After the absorption is finished, the hash value is squeezed out: r bits are output at a time, and the result is truncated to the number of bits used as the hash value. This is the second part of Figure 11.2.
Producing a SHA-3 hash value requires only one squeeze. However, the algorithm can also be used with multiple squeezes to produce arbitrarily long pseudorandom bitstrings. When it is used this way, it is often called SHAKE (= Secure Hash Algorithm with Keccak).
The “length extension” and collision-based attacks of Section 11.3 are less likely to succeed. Suppose two messages yield the same hash value. This means that when the absorption has finished and the squeezing stage is starting, there are r bits of the state that agree with the corresponding bits of the state for the other message. But there are at least c bits that are not output, and there is no reason that these bits match. If, instead of starting the squeezing, you do another round of absorption, the differing bits will cause the subsequent states and the outputted hash values to differ. In other words, there are at least 2^c possible internal states for any given r-bit output.
SHA-3 has four different versions, named SHA3-224, SHA3-256, SHA3-384, and SHA3-512. For SHA3-n, the n denotes the output length in bits, which sets the security level. For example, SHA3-256 is expected to require around 2^128 operations to find a collision. Since 2^128 ≈ 10^38, this should be impossible well into the future. The parameters are taken to be c = 2n and r = 1600 - 2n.
For SHA3-256, these are c = 512, r = 1088.
The same function f is used in all versions, which means that it is easy to change from one security level to another. Note that there is a trade-off between speed and security. If the security parameter c is increased, then r decreases, so the message is read more slowly, since it is read in blocks of r bits.
In the following, we concentrate on SHA3-256. The other versions are obtained by suitably varying the parameters. For more details, see [FIPS 202].
The Padding. We start with a message M. The message is read in blocks of r = 1088 bits, so we want the padded message to have length that is a multiple of 1088. But first the message is padded to M01. This is for “domain separation.” There are other uses of the Keccak algorithm, such as SHAKE (mentioned above), and for these other purposes, M is padded differently, for example with 1111. This initial padding makes it very likely that using M in different situations yields different outputs. Next, “10*1 padding” is used. This means that first a 1 is appended to M01 to yield M011. Then sufficiently many 0s are appended to make the total length one less than a multiple of 1088. Finally, a 1 is appended. We can now divide the result into blocks of length 1088.
Why are these choices made for the padding? Why not simply append enough 0s to get the desired length? If we padded only with 0s, then a message M and the message M0 (that is, M with a 0 appended) would be padded to the same block, so they would have the same hash. Similarly, without the initial domain-separation bits, a message used in SHA-3 could be padded to the same block as a different message used in SHAKE, so the two outputs would be equal. The padding is designed to avoid all such situations.
Absorption and Squeezing. From now on, we assume that the padding has been done and we have n blocks of length 1088: M_1, M_2, ..., M_n.
The absorption now proceeds as in Figure 11.2 (we describe the function f later).
The initial state S_0 is a string of 0s of length 1600.
For i = 1 to n, let S_i = f(S_{i-1} ⊕ (M_i ∥ 0^512)), where 0^512 denotes a string of 0s of length 512. What this does is XOR the message block M_i with the first 1088 bits of S_{i-1}, and then apply the function f. This yields an updated state S_i, which is modified during each iteration of the index i.
Return S = S_n.
The squeezing now proceeds as in Figure 11.2:
Input S, and let Z be the empty string.
While length(Z) < 256 (where 256 is the output size):
Let Z = Z ∥ Trunc_r(S), where Trunc_r(S) denotes the first r = 1088 bits of S.
Let S = f(S).
Return the first 256 bits of Z.
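The control flow of the sponge can be sketched with toy sizes (an illustration only: the permutation f below is a small invertible stand-in, not the real 1600-bit Keccak-f, and the sizes b, r, and the output length are shrunk so the structure is visible):

```python
# Sketch of the sponge construction (absorb, then squeeze), with toy sizes.
b, r, ell = 16, 8, 8        # toy state size, rate, and output length; r + c = b

def f(state):
    # placeholder one-to-one map on b-bit states (NOT the real Keccak-f)
    return tuple(state[1:]) + (state[0] ^ state[3],)

def sponge(blocks):
    S = [0] * b                                   # state initialized to all 0s
    for M in blocks:                              # absorption phase
        S = list(f(tuple(s ^ m for s, m in zip(S, M + [0] * (b - r)))))
    Z = []
    while len(Z) < ell:                           # squeezing phase
        Z += S[:r]                                # output the first r bits
        S = list(f(tuple(S)))
    return Z[:ell]                                # truncate to the output size

digest = sponge([[1, 0, 1, 1, 0, 0, 1, 0]])       # one r-bit (padded) block
assert len(digest) == ell
```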
This bitstring is the 256-bit hash value SHA3-256(M). For the hash value, we need only one squeeze to obtain Z. But the algorithm could also be used to produce a much longer pseudorandom bitstring, in which case several squeezes might be needed.
The function f. The main component of the algorithm is the function f, which we now describe. The input to f is the 1600-bit state of the machine
S = b_0 b_1 b_2 ⋯ b_1599,
where each b_i is a bit. It’s easiest to think of these bits as forming a three-dimensional array A[x, y, z] with coordinates satisfying 0 ≤ x ≤ 4, 0 ≤ y ≤ 4, 0 ≤ z ≤ 63.
A “column” consists of the five bits A[x, y, z] with x and z fixed. A “row” consists of the five bits with y and z fixed. A “lane” consists of the 64 bits with x and y fixed.
When we write “for all x, z” we mean for 0 ≤ x ≤ 4 and 0 ≤ z ≤ 63, and similarly for other combinations of x, y, z.
The correspondence between S and A is given by
A[x, y, z] = b_{64(5y+x)+z}
for all x, y, z. For example, A[3, 2, 5] = b_{64·13+5} = b_837. The ordering of the indices could be described as “lexicographic” order using (y, x, z) (not (x, y, z)), since the index of the bit corresponding to (x, y, z) is smaller than the index for (x′, y′, z′) if (y, x, z) precedes (y′, x′, z′) in “alphabetic order.”
The coordinates x and y are taken to be numbers mod 5, and z is mod 64. For example, A[6, -1, 70] is taken to be A[1, 4, 6], since 6 ≡ 1 mod 5, -1 ≡ 4 mod 5, and 70 ≡ 6 mod 64.
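The index correspondence and the wrap-around of coordinates can be captured in one small function (a sketch, not from the text):

```python
# Sketch: A[x, y, z] = b_{64(5y+x)+z}, with x, y reduced mod 5 and z mod 64.
def index(x, y, z):
    x, y, z = x % 5, y % 5, z % 64
    return 64 * (5 * y + x) + z

assert index(3, 2, 5) == 64 * 13 + 5          # = 837
assert index(6, -1, 70) == index(1, 4, 6)     # coordinates wrap around
assert max(index(x, y, z) for x in range(5)
           for y in range(5) for z in range(64)) == 1599
```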
The computation of f proceeds in several steps. The steps receive the array A as input and they output a modified array A′ to replace A.
The following steps I through V are repeated for i_r = 0 to 23:
The first step XORs the bits in a column with the parities of two nearby columns.
For all x, z, let
C[x, z] = A[x, 0, z] ⊕ A[x, 1, z] ⊕ A[x, 2, z] ⊕ A[x, 3, z] ⊕ A[x, 4, z].
This gives the “parity” of the bitstring formed by the five bits in the column.
For all x, z, let D[x, z] = C[x - 1, z] ⊕ C[x + 1, z - 1].
For all x, y, z, let A′[x, y, z] = A[x, y, z] ⊕ D[x, z].
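The first step can be sketched directly from these formulas (an illustration, not from the text; the array A is represented as a dictionary keyed by (x, y, z)):

```python
# Sketch of step I: XOR each bit with the parities of two nearby columns.
def theta(A):
    C = {(x, z): A[(x, 0, z)] ^ A[(x, 1, z)] ^ A[(x, 2, z)]
                 ^ A[(x, 3, z)] ^ A[(x, 4, z)]
         for x in range(5) for z in range(64)}          # column parities
    D = {(x, z): C[((x - 1) % 5, z)] ^ C[((x + 1) % 5, (z - 1) % 64)]
         for x in range(5) for z in range(64)}
    return {(x, y, z): A[(x, y, z)] ^ D[(x, z)]
            for x in range(5) for y in range(5) for z in range(64)}

zero = {(x, y, z): 0 for x in range(5) for y in range(5) for z in range(64)}
assert theta(zero) == zero          # the all-zero state is unchanged
```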
The second step rotates the 64 bits in each lane by an amount depending on the lane:
For all z, let A′[0, 0, z] = A[0, 0, z].
Let (x, y) = (1, 0).
For t = 0 to 23:
For all z, let A′[x, y, z] = A[x, y, z - (t + 1)(t + 2)/2].
Let (x, y) = (y, 2x + 3y).
Return A′.
For example, consider the bits with coordinates of the form (1, 0, z). They are handled by the case t = 0, and we have (t + 1)(t + 2)/2 = 1, so the bits in this lane are rotated by one position. Then, in Step 3(b), (x, y) is changed to (0, 2) for the iteration with t = 1. We have (t + 1)(t + 2)/2 = 3, so this lane is rotated by three positions. Then (x, y) is changed to (2, 6), which is reduced mod 5 to (2, 1), and we pass to t = 2, which gives a rotation by six (the rotations are by what are known as “triangular numbers”). After t = 23, all of the lanes except (0, 0) have been rotated.
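The walk through the lanes can be traced in a few lines (a sketch, not from the text):

```python
# Sketch of step II: compute each lane's rotation offset from the
# triangular numbers (t+1)(t+2)/2 and the walk (x, y) -> (y, 2x + 3y).
offsets = {(0, 0): 0}                  # lane (0, 0) is not rotated
x, y = 1, 0
for t in range(24):
    offsets[(x, y)] = (t + 1) * (t + 2) // 2 % 64
    x, y = y, (2 * x + 3 * y) % 5

assert len(offsets) == 25              # the walk visits every other lane once
assert offsets[(1, 0)] == 1            # t = 0: rotate by 1
assert offsets[(0, 2)] == 3            # t = 1: rotate by 3
assert offsets[(2, 1)] == 6            # t = 2: rotate by 6
```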
The third step rearranges the positions of the lanes:
For all x, y, z, let A′[x, y, z] = A[x + 3y, x, z].
Again, the coordinate x + 3y should be reduced mod 5.
The next step is the only nonlinear step in the algorithm. It XORs each bit with an expression formed from two other bits in its row.
For all x, y, z, let
A′[x, y, z] = A[x, y, z] ⊕ ((A[x + 1, y, z] ⊕ 1) · A[x + 2, y, z]).
The multiplication here is multiplication of two binary bits, hence is the same as the AND operator.
Finally, some bits in the lane A[0, 0] are modified.
For all x, y, z, let A′[x, y, z] = A[x, y, z].
Set RC = 0^64, a string of 64 zero bits.
For j = 0 to 6, let RC[2^j - 1] = rc(j + 7 i_r), where rc is an auxiliary function defined below.
For all z, let A′[0, 0, z] = A′[0, 0, z] ⊕ RC[z].
Return A′.
After I through V are completed for one value of i_r, the next value of i_r is used and I through V are repeated for the new i_r, through i_r = 23. The final output is the new array A, which yields a new bitstring S of length 1600.
This completes the description of the function f, except that we still need to describe the auxiliary function rc.
The function rc takes an integer t mod 255 as input and outputs a bit according to the following algorithm:
If t ≡ 0 mod 255, return 1. Else
Let R = 10000000.
For i = 1 to t mod 255:
Let R = 0 ∥ R, a string of nine bits R[0]R[1]⋯R[8].
Let R[0] = R[0] ⊕ R[8].
Let R[4] = R[4] ⊕ R[8].
Let R[5] = R[5] ⊕ R[8].
Let R[6] = R[6] ⊕ R[8].
Let R = R[0]R[1]⋯R[7] (that is, discard R[8]).
Return R[0].
The bit that is outputted is thus the first bit of the final string R.
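The algorithm above translates line by line into code (a sketch, not from the text; R is kept as a list of bits):

```python
# Sketch of the auxiliary function rc.
def rc(t):
    if t % 255 == 0:
        return 1
    R = [1, 0, 0, 0, 0, 0, 0, 0]          # R = 10000000
    for _ in range(1, t % 255 + 1):
        R = [0] + R                        # prepend a 0: nine bits R[0..8]
        R[0] ^= R[8]
        R[4] ^= R[8]
        R[5] ^= R[8]
        R[6] ^= R[8]
        R = R[:8]                          # discard R[8]
    return R[0]

assert rc(0) == 1
# For i_r = 0, only RC[2^0 - 1] = rc(0) is set, since rc(1), ..., rc(6) are 0:
assert [rc(j) for j in range(7)] == [1, 0, 0, 0, 0, 0, 0]
```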
Let p be a prime and let α be an integer with p ∤ α. Let h(x) = α^x mod p. Explain why h is not a good cryptographic hash function.
Alice claims that she knows who will win the next World Cup. She takes the name of the team, T, and encrypts it with a one-time pad K, sending C = T ⊕ K to Bob. After the World Cup is finished, Alice reveals K, and Bob computes C ⊕ K to determine Alice’s guess. Why should Bob not believe that Alice actually guessed the correct team, even if the team he computes is the actual winner?
To keep Alice from changing K, Bob requires Alice to send not only C but also h(K), where h is a good cryptographic hash function. How does the use of the hash function convince Bob that Alice is not changing K?
In the procedure in (b), Bob receives C and h(K). Show how he can determine Alice’s prediction without needing Alice to send K. (Hint: There are fewer than 100 teams that could win the World Cup.)
Let n be the product of two distinct large primes and let h(x) = x^2 mod n.
Why is h preimage resistant? (Of course, there are some values, such as perfect squares y = x^2 with x small, for which it is easy to find a preimage. But usually it is difficult.)
Why is h not strongly collision resistant?
Let h be a cryptographic hash function. Nelson tries to make new hash functions.
He takes a large prime p and a primitive root α for p. For an input x, he computes h(x), then sets h_1(x) = α^{h(x)} mod p. The function h_1 is not fast enough to be a hash function. Find one other property of hash functions that fails for h_1 and one that holds for h_1, and justify your answers.
Since his function in part (a) is not fast enough, Nelson tries using h_2. This is very fast. Find one other property of hash functions that holds for h_2 and one that fails for h_2, and justify your answers.
Suppose a message M is divided into blocks of length 160 bits: M = M_1 ∥ M_2 ∥ ⋯ ∥ M_n. Let h(M) = M_1 ⊕ M_2 ⊕ ⋯ ⊕ M_n. Which of the properties (1), (2), (3) for a hash function does h satisfy and which does it not satisfy? Justify your answers.
One way of storing and verifying passwords is the following. A file contains a user’s login id plus the hash of the user’s password. When the user logs in, the computer checks to see if the hash of the password is the same as the hash stored in the file. The password is not stored in the file. Assume that the hash function is a good cryptographic hash function.
Suppose Eve is able to look at the file. What property of the hash function prevents Eve from finding a password that will be accepted as valid?
When the user logs in, and the hash of the user’s password matches the hash stored in the file, what property of the hash function says that the user probably entered the correct password? (Hint: Your answers to (a) and (b) should not be the same.)
The initial values in SHA-256 are extracted from the hexadecimal expansions of the fractional parts of the square roots of the first eight primes. Here is what that means.
Compute √2 and write the answer in hexadecimal. The answer should be 1.6A09E667⋯.
Do a similar computation with 2 replaced by 3, 5, and 7, and compare with the appropriate initial hash values.
Alice and Bob (and no one else) share a key K. Each time that Alice wants to make sure that she is communicating with Bob, she sends him a random string R of 100 bits. Bob computes B = h(K ∥ R), where h is a good cryptographic hash function, and sends B to Alice. Alice computes h(K ∥ R). If this matches what Bob sent her, she is convinced that she is communicating with Bob.
What property of h convinces Alice that she is communicating with Bob?
Suppose Alice’s random number generator is broken and she sends the same R each time she communicates with anyone. How can Eve (who doesn’t know K, but who intercepts all communications between Alice and Bob) convince Alice that she is Bob?
Show that neither of the two hash functions of Section 11.2 is preimage resistant. That is, given an arbitrary output y (of the appropriate length), show how to find an input whose hash is y.
Find a collision for each of the two hash functions of Section 11.2.
An unenlightened professor asks his students to memorize the first 1000 digits of π for the exam. To grade the exam, he uses a cryptographic hash function h with 100-bit outputs. Instead of carefully reading the students’ answers, he hashes each of them individually to obtain binary strings of length 100. Your score on the exam is the number of bits of the hash of your answer that agree with the corresponding bits of the hash of the correct answer.
If someone gets 100% on the exam, why is the professor confident that the student’s answer is correct?
Suppose each student gets every digit of π wrong (a very unlikely occurrence!), and they all have different answers. Approximately what should the average on the exam be?
A bank in Tokyo is sending a terabyte of data to a bank in New York. There could be transmission errors. Therefore, the bank in Tokyo uses a cryptographic hash function and computes the hash of the data. This hash value is sent to the bank in New York. The bank in New York computes the hash of the data received. If this matches the hash value sent from Tokyo, the New York bank decides that there was no transmission error. What property of cryptographic hash functions allows the bank to decide this?
(Thanks to Danna Doratotaj for suggesting this problem.)
Let f map 256-bit strings to 256-bit strings by f(x) = x. Show that f is not preimage resistant but is collision resistant.
Let h be a good cryptographic hash function with a 256-bit output. Define a map g from binary strings of arbitrary length to binary strings of length 257, as follows. If x has length 256, let g(x) = x ∥ 1 (that is, x with a 1 appended). If x does not have length 256, let g(x) = h(x) ∥ 0. Show that g is collision resistant, and show that if y is a randomly chosen binary string of length 257, then the probability is at least 50% that you can easily find x with g(x) = y.
The functions f and g show that collision resistance does not imply preimage resistance, even though one might suspect otherwise.
Show that the computation of rc in Keccak (see the end of Section 11.5) can be given by an LFSR.
Let Maj(X, Y, Z) be the function on 32-bit strings in the description of SHA-2.
Suppose that the first bit of X and the first bit of Y are 1 and the first bit of Z is arbitrary. Show that the first bit of Maj(X, Y, Z) is 1.
Suppose that the first bit of X and the first bit of Y are 0 and the first bit of Z is arbitrary. Show that the first bit of Maj(X, Y, Z) is 0.
This shows that Maj gives the bitwise majority (for 0 vs. 1) of the strings X, Y, Z.
Let h be an iterative hash function that operates successively on input blocks of 512 bits. In particular, there is a compression function h′ and an initial value IV. The hash of a 1024-bit message B_1 ∥ B_2 is computed by X_1 = h′(IV, B_1) and h(B_1 ∥ B_2) = h′(X_1, B_2). Suppose we have found a collision h′(IV, B_1) = h′(IV, B_1′) for some distinct 512-bit blocks B_1 and B_1′. Choose distinct primes p and p′, each of approximately 240 bits. Regard B_1 and B_1′ as numbers between 0 and 2^512.
Show that there exists an with such that
Show that if , then is approximately , and similarly for . (Assume that and are approximately .)
Use the Prime Number Theorem (see Section 3.1) to show that the probability that is prime is approximately and the probability that both and are prime is about .
Show that it is likely that there is some with such that both and are primes.
Show that and satisfy .
This method of producing two RSA moduli with the same hash values is based on the method of [Lenstra et al.] for using a collision to produce two X.509 certificates with the same hashes. The method presented here produces moduli whose prime factors are of significantly different sizes (240 bits and 784 bits), but an adversary does not know this without factoring the moduli.
If there are 23 people in a room, the probability is slightly more than 50% that two of them have the same birthday. If there are 30, the probability is around 70%. This might seem surprising; it is called the birthday paradox. Let’s see why it’s true. We’ll ignore leap years (which would slightly lower the probability of a match) and we assume that all birthdays are equally likely (if not, the probability of a match would be slightly higher).
Consider the case of 23 people. We’ll compute the probability that they all have different birthdays. Line them up in a row. The first person uses up one day, so the second person has probability 364/365 of having a different birthday. There are two days removed for the third person, so the probability is 363/365 that the third birthday differs from the first two. Therefore, the probability of all three people having different birthdays is (364/365)(363/365). Continuing in this way, we see that the probability that all 23 people have different birthdays is
(364/365)(363/365)(362/365)⋯(343/365) ≈ 0.493.
Therefore, the probability of at least two having the same birthday is
1 - 0.493 = 0.507.
One way to understand the preceding calculation intuitively is to consider the case of 40 people. If the first 30 have a match, we’re done, so suppose the first 30 have different birthdays. Now we have to choose the last 10 birthdays. Since 30 birthdays are already chosen, we have approximately a 10% chance that a randomly chosen birthday will match one of the first 30. And we are choosing 10 birthdays. Therefore, it shouldn’t be too surprising that we get a match. In fact, the probability is 89% that there is a match among 40 people.
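The quoted probabilities are easy to verify exactly (a sketch, not from the text):

```python
# Exact birthday computation: the probability that `people` all have
# different birthdays among `days` equally likely days.
def prob_all_different(people, days=365):
    p = 1.0
    for k in range(people):
        p *= (days - k) / days
    return p

p23 = 1 - prob_all_different(23)   # slightly more than 50%
p40 = 1 - prob_all_different(40)   # about 89%
assert 0.50 < p23 < 0.51
assert 0.89 < p40 < 0.90
```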
More generally, suppose we have N objects, where N is large. There are r people, and each chooses an object (with replacement, so several people could choose the same one). Then
Prob(there is a match) ≈ 1 - e^(-r^2/2N).
Note that this is only an approximation that holds for large N; for small N it is better to use the above product and obtain an exact answer. In Exercise 12, we derive this approximation. Choosing r = c√N, we find that if c ≈ 1.177, then the probability is 50% that at least two people choose the same object.
To summarize, if there are N possibilities and we have a list of length √N, then there is a good chance of a match. If we want to increase the chance of a match, we can make the list have length 2√N or 5√N. The main point is that a length of a constant times √N (instead of something like N) suffices.
For example, suppose we have 40 license plates, each ending in a three-digit number. What is the probability that two of the license plates end in the same three digits? We have N = 1000, the number of possible three-digit numbers, and r = 40, the number of license plates under consideration. Since
-r^2/2N = -40^2/2000 = -0.8,
the approximate probability of a match is
1 - e^(-0.8) ≈ 0.551,
which is more than 50%. We stress that this is only an approximation. The correct answer is obtained by calculating
1 - (999/1000)(998/1000)⋯(961/1000) ≈ 0.546.
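Both numbers can be checked directly (a sketch, not from the text):

```python
# The license-plate example: the 1 - e^(-r^2/2N) estimate vs. the exact product.
import math

N, r = 1000, 40
prob_no_match = 1.0
for k in range(r):
    prob_no_match *= (N - k) / N       # (999/1000)(998/1000)...(961/1000)
exact = 1 - prob_no_match
approx = 1 - math.exp(-r * r / (2 * N))

assert abs(approx - 0.551) < 0.001     # 1 - e^(-0.8)
assert 0.54 < exact < 0.55             # about 0.546
```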
The next time you are stuck in traffic (and have a passenger to record numbers), check out this prediction.
But what is the probability that one of these 40 license plates has the same last three digits as yours (assuming that yours ends in three digits)? Each plate has probability 999/1000 of not matching yours, so the probability is (999/1000)^40 ≈ 0.96 that none of the 40 plates matches your plate. The reason the birthday paradox works is that we are not just looking for matches between one fixed plate, such as yours, and the other plates. We are looking for matches between any two plates in the set, so there are many more opportunities for matches.
For more examples, see Examples 36 and 37 in the Computer Appendices.
The applications of these ideas to cryptology require a slightly different setup. Suppose there are two rooms, each with 30 people. What is the probability that someone in the first room has the same birthday as someone in the second room? More generally, suppose there are N objects and there are two groups of r people each. Each person from each group selects an object (with replacement). What is the probability that someone from the first group chooses the same object as someone from the second group? In this case,
Prob(there is a match between the two groups) ≈ 1 - e^(-r^2/N).
If λ = r^2/N, then the probability of exactly i matches is approximately λ^i e^(-λ)/i!. An analysis of this problem, with generalizations, is given in [Girault et al.]. Note that the present situation differs from the earlier problem of finding a match in one set of people. Here, we have two sets of r people, so a total of 2r people. Therefore, the probability of a match in this combined set is approximately 1 - e^(-(2r)^2/2N) = 1 - e^(-2r^2/N). But around half of the time, these matches are between members of the same group, and half the time the matches are the desired ones, namely, between the two groups. The precise effect is to cut the probability down to 1 - e^(-r^2/N).
Again, if there are N possibilities and we have two lists of length √N, then there is a good chance of a match. Also, if we want to increase the chance of a match, we can make the lists have length 2√N or 5√N. The main point is that a length of a constant times √N (instead of something like N) suffices.
For example, if we take N = 365 and r = 30, then
r^2/N = 900/365 ≈ 2.466.
Since 1 - e^(-2.466) ≈ 0.915, there is approximately a 91.5% probability that someone in one group of 30 people has the same birthday as someone in a second group of 30 people.
The birthday attack can be used to find collisions for hash functions if the output of the hash function is not sufficiently large. Suppose that h is an n-bit hash function. Then there are 2^n possible outputs. Make a list of the values h(x) for approximately 2^(n/2) random choices of x. Then we have the situation of 2^(n/2) “people” with 2^n possible “birthdays,” so there is a good chance of having two values x_1 and x_2 with the same hash value. If we make the list longer, for example 10·2^(n/2) values of x, the probability becomes very high that there is a match.
Similarly, suppose we have two sets of inputs, S and T. If we compute h(s) for approximately 2^(n/2) randomly chosen s ∈ S and compute h(t) for approximately 2^(n/2) randomly chosen t ∈ T, then we expect some value h(s) to be equal to some value h(t). This situation will arise in an attack on signature schemes in Chapter 13, where S will be a set of good documents and T will be a set of fraudulent documents.
If the output of the hash function is around 60 bits, the above attacks have a high chance of success. It is necessary to make lists of length approximately 2^30 and to store them. This is possible on most computers. However, if the hash function outputs 256-bit values, then the lists have length around 2^128, which is too large, both in time and in memory.
Suppose we are working with a large prime p and want to evaluate a discrete logarithm L_α(β). In other words, we want to solve α^x ≡ β (mod p). We can do this with high probability by a birthday attack.
Make two lists, both of length around √p:
The first list contains numbers α^k (mod p) for approximately √p randomly chosen values of k.
The second list contains numbers β α^(-m) (mod p) for approximately √p randomly chosen values of m.
There is a good chance that there is a match between some element on the first list and some element on the second list. If so, we have
α^k ≡ β α^(-m), hence α^(k+m) ≡ β (mod p).
Therefore, x ≡ k + m (mod p - 1) is the desired discrete logarithm.
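The attack can be demonstrated with a toy prime (a sketch, not from the text; p = 1019 with primitive root 2, and list sizes somewhat larger than √p ≈ 32 so that a match is very likely):

```python
# Sketch: a birthday attack on a discrete logarithm, with toy parameters.
# We look for a match between alpha^k and beta * alpha^(-m) mod p;
# a match gives alpha^(k+m) = beta.
import random

p, alpha = 1019, 2                  # 2 is a primitive root mod 1019
x_secret = 777
beta = pow(alpha, x_secret, p)

random.seed(1)
first = {}                          # first list: alpha^k -> k
for _ in range(60):
    k = random.randrange(1, p - 1)
    first[pow(alpha, k, p)] = k

log_x = None
while log_x is None:                # second list, generated until a match
    m = random.randrange(1, p - 1)
    val = beta * pow(alpha, -m, p) % p
    if val in first:
        log_x = (first[val] + m) % (p - 1)

assert pow(alpha, log_x, p) == beta   # log_x solves alpha^x = beta mod p
```

The solution found need not equal 777 itself, only be congruent to a valid exponent mod p - 1 (here α is a primitive root, so it is unique mod p - 1).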
Let’s compare this method with the Baby Step, Giant Step (BSGS) method described in Section 10.2. Both methods have running time and storage space proportional to √p. However, the BSGS algorithm is deterministic, which means that it is guaranteed to produce an answer. The birthday algorithm is probabilistic, which means that it probably produces an answer, but this is not guaranteed. Moreover, there is a computational advantage to the BSGS algorithm. Computing one member of a list from the previous one requires one multiplication (by α for the baby steps, or by a fixed power of α for the giant steps). In the birthday algorithm, the exponent k is chosen randomly, so α^k must be computed anew each time. This makes the algorithm slower. Therefore, the BSGS algorithm is somewhat superior to the birthday method.
In this section, we show that the iterative nature of hash algorithms based on the Merkle-Damgård construction makes them less resistant than expected to finding multicollisions, namely large sets of inputs that all have the same hash value. This was pointed out by Joux [Joux], who also gave implications for properties of concatenated hash functions, which we discuss below.
Suppose there are r people and there are N possible birthdays. It can be shown that if r ≈ N^((k-1)/k), then there is a good chance of at least k people having the same birthday. In other words, we expect a k-collision. If the output of a hash function is random, then we expect that this estimate holds for k-collisions of hash function values. Namely, if a hash function has n-bit outputs, hence 2^n possible values, and if we calculate the hash values of approximately 2^(n(k-1)/k) inputs, we expect a k-collision. However, in the following, we’ll show that often we can obtain collisions much more easily.
In many hash functions, for example, SHA-256, there is a compression function f that operates on inputs of a fixed length. Also, there is a fixed initial value IV. The message is padded to obtain the desired format, then the following steps are performed:
Split the message into blocks M_1, M_2, ..., M_ℓ.
Let H_0 be the initial value IV.
For i = 1, 2, ..., ℓ, let H_i = f(H_{i-1}, M_i).
Let H(M) = H_ℓ.
In SHA-256, the compression function is described in Section 11.4. For each iteration, it takes a 256-bit input from the preceding iteration along with a message block of length 512 and outputs a new string of length 256.
Suppose the output of the function f, and therefore also of the hash function, has n bits. A birthday attack can find, in approximately 2^(n/2) steps, two blocks m_0 and m_0′ such that f(H_0, m_0) = f(H_0, m_0′). Let H_1 = f(H_0, m_0). A second birthday attack finds blocks m_1 and m_1′ with f(H_1, m_1) = f(H_1, m_1′). Continuing in this manner, we let
H_i = f(H_{i-1}, m_{i-1})
and use a birthday attack to find m_i and m_i′ with
f(H_i, m_i) = f(H_i, m_i′).
This process is continued until we have t pairs of blocks m_i, m_i′, 0 ≤ i ≤ t - 1, where t is some integer to be determined later.
We claim that each of the 2^t messages
b_0 b_1 ⋯ b_{t-1}
(all possible combinations with b_i = m_i or b_i = m_i′) has the same hash value. This is because of the iterative nature of the hash algorithm. At each calculation H_{i+1} = f(H_i, b_i), the same value H_{i+1} is obtained whether b_i = m_i or b_i = m_i′. Therefore, the output of the function f during each step of the hash algorithm is independent of whether an m_i or an m_i′ is used. Therefore, the final output of the hash algorithm is the same for all 2^t messages. We thus have a 2^t-collision.
This procedure takes approximately t·2^(n/2) steps and has an expected running time of approximately a constant times t·2^(n/2) (see Exercise 13). Let t = 2, for example. Then it takes only around twice as long to find four messages with the same hash value as it took to find two messages with the same hash. If the output of the hash function were truly random, rather than produced, for example, by an iterative algorithm, then the above procedure would not work. The expected time to find four messages with the same hash would then be approximately 2^(3n/4), which is much longer than the time it takes to find two colliding messages. Therefore, it is easier to find multicollisions with an iterative hash algorithm.
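Joux’s construction can be demonstrated on a toy iterative hash (an illustration, not from the text: the compression function here is SHA-256 truncated to 16 bits, so each birthday attack takes only a few hundred evaluations):

```python
# Sketch: a 4-collision (t = 2) on a toy iterative hash via two birthday attacks.
import hashlib
from itertools import product

def comp(h, block):                    # toy compression: 2-byte chaining value
    return hashlib.sha256(h + block).digest()[:2]

def find_pair(h):                      # birthday search at chaining value h
    seen = {}
    i = 0
    while True:
        block = i.to_bytes(4, 'big')
        out = comp(h, block)
        if out in seen:
            return seen[out], block, out
        seen[out] = block
        i += 1

h0 = b'\x00\x00'
pairs, h = [], h0
for _ in range(2):                     # t = 2 successive birthday attacks
    m, m2, h = find_pair(h)
    pairs.append((m, m2))

def iter_hash(blocks, h=h0):
    for b in blocks:
        h = comp(h, b)
    return h

# All 2^t = 4 block choices hash to the same value.
digests = {iter_hash(choice) for choice in product(*pairs)}
assert len(digests) == 1
```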
An interesting consequence of the preceding discussion relates to attempts to improve hash functions by concatenating their outputs. Suppose we have two hash functions h_1 and h_2. Before [Joux] appeared, the general wisdom was that the concatenation
h(x) = h_1(x) ∥ h_2(x)
should be a significantly stronger hash function than either h_1 or h_2 individually. This would allow people to use somewhat weak hash functions to build much stronger ones. However, it now seems that this is not the case. Suppose the output of h_i has n_i bits. Also, assume that h_1 is calculated by an iterative algorithm, as in the preceding discussion. No assumptions are needed for h_2. We may even assume that it is a random oracle, in the sense of Section 12.3. In time approximately (n_2/2)·2^(n_1/2), we can find 2^(n_2/2) messages that all have the same hash value for h_1. We then compute the value of h_2 for each of these messages. By the birthday paradox, we expect to find a match among these values of h_2. Since these messages all have the same h_1 value, we have a collision for h_1 ∥ h_2. Therefore, in time proportional to (n_2/2)(2^(n_1/2) + 2^(n_2/2)) (we’ll explain this estimate shortly), we expect to be able to find a collision for h_1 ∥ h_2. This is not much longer than the time a birthday attack takes to find a collision for the longer of h_1 and h_2, and is much faster than the time 2^((n_1+n_2)/2) that a standard birthday attack would take on this concatenated hash function.
How did we get the estimate for the running time? We used (n_2/2)·2^(n_1/2) steps to get the 2^(n_2/2) messages with the same h_1 value. Each of these messages consisted of n_2/2 blocks of a fixed length. We then evaluated h_2 for each of these messages. For almost every hash function, the evaluation time is proportional to the length of the input. Therefore, the evaluation time is proportional to n_2/2 for each of the 2^(n_2/2) messages that are given to h_2. This gives the term (n_2/2)·2^(n_2/2) in the estimated running time.
Ideally, a hash function is indistinguishable from a random function. The random oracle model, introduced in 1993 by Bellare and Rogaway [Bellare-Rogaway], gives a convenient method for analyzing the security of cryptographic algorithms that use hash functions by treating hash functions as random oracles.
A random oracle acts as follows. Anyone can give it an input, and it will produce a fixed-length output. If the input has already been asked previously by someone, then the oracle outputs the same value as it did before. If the input is not one that has previously been given to the oracle, then the oracle gives a randomly chosen output. For example, it could flip n fair coins and use the result to produce an n-bit output.
For practical reasons, a random oracle cannot be used in most cryptographic algorithms; however, assuming that a hash function behaves like a random oracle allows us to analyze the security of many cryptosystems that use hash functions.
We already made such an assumption in Section 12.1. When calculating the probability that a birthday attack finds collisions for a hash function, we assumed that the output of the hash function is randomly and uniformly distributed among all possible outcomes. If this is not the case, so the hash function has some values that tend to occur more frequently than others, then the probability of finding collisions is somewhat higher (for example, consider the extreme case of a really bad hash function that, with high probability, outputs only one value). Therefore, our estimate for the probability of collisions really only applies to an idealized setting. In practice, the use of actual hash functions probably produces very slightly more collisions.
In the following, we show how the random oracle model is used to analyze the security of a cryptosystem. Because the ciphertext is much longer than the plaintext, the system we describe is not as efficient as methods such as OAEP (see Section 9.2). However, the present system is a good illustration of the use of the random oracle model.
Let $f$ be a one-way, one-to-one function that Bob knows how to invert. For example, $f(x) \equiv x^e \pmod{n}$, where $(e, n)$ is Bob's public RSA key. Let $H$ be a hash function. To encrypt a message $m$, which is assumed to have the same bitlength as the output of $H$, Alice chooses a random integer $r$ mod $n$ and lets the ciphertext be
$$E(m) = \big(f(r),\; H(r) \oplus m\big).$$

When Bob receives $(y_1, y_2)$, he computes
$$H\big(f^{-1}(y_1)\big) \oplus y_2.$$

It is easy to see that this decryption produces the original message $m$, since $f^{-1}(y_1) = r$ and therefore $H(r) \oplus (H(r) \oplus m) = m$.
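A small numerical sketch of this system may help. The parameters below are assumptions for illustration only: a toy RSA function (two six-digit primes, far too small to be secure) stands in for the one-way function $f$, and SHA-256 plays the role of $H$.

```python
import hashlib

# Toy RSA parameters (NOT secure): f(r) = r^e mod n is Bob's one-way
# function; d is his trapdoor for inverting it.
p_, q_ = 999983, 1000003
n = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))

def H(r: int) -> int:
    """Hash an integer to a 256-bit integer (SHA-256 as the oracle)."""
    return int.from_bytes(hashlib.sha256(str(r).encode()).digest(), "big")

def encrypt(m: int, r: int):
    # m is assumed to have at most 256 bits, the output length of H
    return pow(r, e, n), H(r) ^ m          # (f(r), H(r) XOR m)

def decrypt(y1: int, y2: int) -> int:
    r = pow(y1, d, n)                      # Bob inverts f
    return H(r) ^ y2                       # H(r) XOR (H(r) XOR m) = m

m = int.from_bytes(b"a 256-bit (32-byte) secret msg!!", "big")
r = 123456789
recovered = decrypt(*encrypt(m, r))
```

Note that the ciphertext is about twice as long as the plaintext, which is the inefficiency mentioned above.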
Let's assume that the hash function $H$ is a random oracle. We'll show that if Carla can guess correctly with probability significantly better than 50%, then she can invert $f$ with probability significantly better than zero. Therefore, if $f$ is truly a one-way function, the cryptosystem has the ciphertext indistinguishability property. To test this property, Alice and Carla play the CI Game from Section 4.5. Carla chooses two plaintexts, $m_0$ and $m_1$, and gives them to Alice. Alice randomly chooses $b = 0$ or $b = 1$ and encrypts $m_b$, yielding the ciphertext $(y_1, y_2) = (f(r),\, H(r) \oplus m_b)$. She gives $(y_1, y_2)$ to Carla, who tries to guess whether $b = 0$ or $b = 1$. Suppose that the probability that Carla can invert $f$ on a randomly chosen value $y_1$ is $\epsilon$.
Let $S$ be the set of $y_1$ for which Carla can compute $f^{-1}(y_1)$, so $P(y_1 \in S) = \epsilon$. If $y_1 \in S$, then Carla computes the value $r$ such that $f(r) = y_1$. She then asks the random oracle for the value $H(r)$, computes $H(r) \oplus y_2$, and obtains $m_b$. Therefore, when $y_1 \in S$, Carla guesses correctly.
If $y_1 \notin S$, then Carla does not know the value of $H(r)$. Since $H$ is a random oracle, the possible values of $H(r)$ are randomly and uniformly distributed among all possible outputs, so XORing $m_b$ with $H(r)$ is the same as encrypting $m_b$ with a one-time pad. As we saw in Section 4.4, this means that $y_2$ gives Carla no information about whether it comes from $m_0$ or from $m_1$. So if $y_1 \notin S$, Carla has probability $1/2$ of guessing the correct plaintext.
Therefore,
$$P(\text{Carla guesses correctly}) = P(y_1 \in S)\cdot 1 + P(y_1 \notin S)\cdot \tfrac{1}{2} = \epsilon + \frac{1-\epsilon}{2} = \frac{1}{2} + \frac{\epsilon}{2}.$$

It follows that $\epsilon = 2\left(P(\text{Carla guesses correctly}) - \tfrac{1}{2}\right)$.
If we assume that it is computationally infeasible for Carla to invert $f$ with probability better than $\epsilon$, then we conclude that it is computationally infeasible for Carla to guess correctly with probability better than $\frac{1}{2} + \frac{\epsilon}{2}$. Therefore, if the function $f$ is one-way, then the cryptosystem has the ciphertext indistinguishability property.
Note that it was important in the argument to assume that the values of $H$ are randomly and uniformly distributed. If this were not the case, so the hash function had some values that tend to occur more frequently than others, then Carla might have some method for guessing correctly with better than 50% probability when $y_1 \notin S$. Therefore, the assumption that the hash function is a random oracle is important.
Of course, a good hash function is probably close to acting like a random oracle. In this case, the above argument shows that the cryptosystem with an actual hash function should be fairly resistant to this type of attack. However, it should be noted that Canetti, Goldreich, and Halevi [Canetti et al.] have constructed a cryptosystem that is secure in the random oracle model but is not secure for any concrete choice of hash function. Fortunately, this construction is not one that would be used in practice.
The above procedure of reducing the security of a system to the solvability of some fundamental problem, such as the non-invertibility of a one-way function, is common in proofs of security. For example, in Section 10.5, we reduced certain questions for the ElGamal public key cryptosystem to the solvability of Diffie-Hellman problems.
Section 12.2 shows that most hash functions do not behave as random oracles with respect to multicollisions. This indicates that some care is needed when applying the random oracle model.
The use of the random oracle model in analyzing a cryptosystem is somewhat controversial. However, many people feel that it gives some indication of the strength of the system. If a system is not secure in the random oracle model, then it surely is not safe in practice. The controversy arises when a system is proved secure in the random oracle model. What does this say about the security of actual implementations? Different cryptographers will give different answers. However, at present, there seems to be no better widely applicable method of analyzing the security.
Cryptographic hash functions are some of the most widely used cryptographic tools, perhaps second only to block ciphers. They find applications in many different areas of information security. Later, in Chapter 13, we shall see an application of hash functions to digital signatures, where the fact that they shrink the representation of data makes the operation of creating a digital signature more efficient. We now look at how they may be used to serve the role of a cipher by providing data confidentiality.
A cryptographic hash function takes an input of arbitrary length and provides a fixed-size output that appears random. In particular, if we have two distinct inputs, then their hashes should be different. Generally, their hashes are very different. This is a property that hash functions share with good ciphers and is a property that allows us to use a hash function to perform encryption.
Using a hash function to perform encryption is very similar to a stream cipher in which the output of a pseudorandom number generator is XORed with the plaintext. We saw such an example when we studied the output feedback mode (OFB) of a block cipher. Much like the block cipher did for OFB, the hash function creates a pseudorandom bit stream that is XORed with the plaintext to create a ciphertext.
In order to make a cryptographic hash function operate as a stream cipher, we need two components: a key shared between Alice and Bob, and an initialization vector. We shall soon address the issue of the initialization vector, but for now let us begin by assuming that Alice and Bob have established a shared secret key .
Now, Alice could create a pseudorandom byte $x_1$ by taking the leftmost byte of the hash of $K$; that is, $x_1 = L_8(H(K))$, where $L_8$ denotes the leftmost 8 bits. She could then encrypt a byte of plaintext $p_1$ by XORing with the pseudorandom byte to produce a byte of ciphertext $c_1 = p_1 \oplus x_1$.

But if she has more than one byte of plaintext, then how should she continue? We use feedback, much like we did in OFB mode. The next pseudorandom byte should be created by $x_2 = L_8(H(K \,\|\, x_1))$, where $\|$ denotes concatenation. Then the next ciphertext byte can be created by $c_2 = p_2 \oplus x_2$.

In general, the $j$th pseudorandom byte is created by $x_j = L_8(H(K \,\|\, x_{j-1}))$, and encryption is simply XORing with the plaintext: $c_j = p_j \oplus x_j$. Decryption is a simple matter, as Bob must merely recreate the bytes $x_j$ and XOR with the ciphertext to get out the plaintext: $p_j = c_j \oplus x_j$.
There is a simple problem with this procedure for encryption and decryption. What if Alice wants to encrypt a message on Monday, and a different message on Wednesday? How should she create the pseudorandom bytes? If she starts all over, then the pseudorandom sequence on Monday and Wednesday will be the same. This is not desirable.
Instead, we must introduce some randomness to make certain the two bit streams are different. Thus, each time Alice sends a message, she should choose a random initialization vector, which we denote by $IV$. She then starts by creating $x_1$, the leftmost byte of $H(K \,\|\, IV)$, and proceeds as before. But now she must send $IV$ to Bob, which she can do when she sends the first ciphertext byte. If Eve intercepts $IV$, she is still not able to compute $x_1$ since she doesn't know $K$. In fact, if $H$ is a good hash function, then knowledge of $IV$ should give no information about the pseudorandom bytes.
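The whole scheme can be sketched as follows, with SHA-256 standing in for the hash function (the function names and the one-byte feedback are illustrative choices; a practical design would feed back more than one byte per hash evaluation).

```python
import hashlib, os

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """x1 = leftmost byte of H(key || IV); thereafter, OFB-style
    feedback: x_j = leftmost byte of H(key || x_{j-1})."""
    out, prev = [], iv
    for _ in range(length):
        prev = hashlib.sha256(key + prev).digest()[:1]  # leftmost byte
        out.append(prev[0])
    return bytes(out)

def encrypt(key: bytes, plaintext: bytes):
    iv = os.urandom(16)                       # fresh randomness per message
    ks = keystream(key, iv, len(plaintext))
    return iv, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, iv, len(ciphertext))  # Bob recreates the bytes
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = b"shared secret key"
iv, ct = encrypt(key, b"attack at dawn")
pt = decrypt(key, iv, ct)
```

Because a fresh `iv` is drawn for every message, Monday's and Wednesday's keystreams differ even though the key $K$ is the same.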
The idea of using a hash function to create an encryption procedure can be modified to create an encryption procedure that incorporates the plaintext, much in the same way as the CFB mode does.
When Alice sends a message to Bob, two important considerations are
Is the message really from Alice?
Has the message been changed during transmission?
Message authentication codes (MACs) solve these problems. Just as is done with digital signatures, Alice creates an appendix, the MAC, that is attached to the message. Thus, Alice sends the pair $(m, \mathrm{MAC})$ to Bob.
One of the most commonly used MACs is HMAC (Hashed MAC), which was invented in 1996 by Mihir Bellare, Ran Canetti, and Hugo Krawczyk and is used in the IPSec and TLS protocols for secure authenticated communications.
To set up the protocol, we need a hash function $H$. For concreteness, assume that $H$ processes messages in blocks of 512 bits. Alice and Bob need to share a secret key $K$. If $K$ is shorter than 512 bits, append enough 0s to make its length be 512. If $K$ is longer than 512 bits, it can be hashed to obtain a shorter key, so we assume that $K$ has 512 bits.
We also form innerpadding and outerpadding strings:
opad = 5C5C5C … 5C, ipad = 363636 … 36,
which are binary strings of length 512, expressed in hexadecimal (5=0101, C=1100, etc.).
Let $m$ be the message. Then Alice computes
$$\mathrm{HMAC}(m) = H\big((K \oplus \mathrm{opad}) \,\|\, H((K \oplus \mathrm{ipad}) \,\|\, m)\big),$$
where $\|$ denotes concatenation.

In other words, Alice first does the natural step of computing $H((K \oplus \mathrm{ipad}) \,\|\, m)$. But, as pointed out in Section 11.3, this is susceptible to length extension attacks for hash functions based on the Merkle-Damgård construction. So she prepends $K \oplus \mathrm{opad}$ and hashes again. This seems to be resistant to all known attacks.
Alice now sends the message $m$, either encrypted or not, along with $\mathrm{HMAC}(m)$, to Bob. Since Bob knows $K$, he can compute $\mathrm{HMAC}(m)$. If it agrees with the value that Alice sent, he assumes that the message is from Alice and that it is the message that Alice sent.
If Eve tries to change $m$ to $m'$, then $H((K \oplus \mathrm{ipad}) \,\|\, m')$ should differ from $H((K \oplus \mathrm{ipad}) \,\|\, m)$ (collision resistance) and therefore $\mathrm{HMAC}(m')$ should differ from $\mathrm{HMAC}(m)$ (again, by collision resistance). Therefore, $\mathrm{HMAC}(m)$ tells Bob that the message is authentic. Also, it seems unlikely that Eve can produce a valid $\mathrm{HMAC}$ for a message without knowing $K$, so Bob is sure that Alice sent the message.
For an analysis of the security of HMAC, see [Bellare et al.].
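The construction above can be written out directly and checked against a library implementation. The sketch below builds HMAC-SHA256 by hand (SHA-256 has the 512-bit block size assumed above) and compares it with Python's `hmac` module.

```python
import hashlib, hmac

def my_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    B = 64                                   # SHA-256 block: 512 bits
    if len(key) > B:
        key = hashlib.sha256(key).digest()   # long keys are hashed first
    key = key.ljust(B, b"\x00")              # pad with 0s to 512 bits
    ipad = bytes(b ^ 0x36 for b in key)      # K XOR 3636...36
    opad = bytes(b ^ 0x5C for b in key)      # K XOR 5C5C...5C
    inner = hashlib.sha256(ipad + msg).digest()   # the "natural" hash
    return hashlib.sha256(opad + inner).digest()  # prepend and hash again

tag = my_hmac_sha256(b"key", b"message")
assert tag == hmac.new(b"key", b"message", hashlib.sha256).digest()
```

The outer hash is exactly the step that defeats length extension attacks on the inner Merkle-Damgård hash.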
An alternative to using hash functions is to use a block cipher in CBC mode. Let $E_K$ be a block cipher, such as AES, using a secret key $K$ that is shared between Alice and Bob. We may create an appendix very similar to the output of a keyed hash function by applying $E_K$ in CBC mode to the entire message and using the final output of the CBC operation as the MAC.
Suppose that our message is of the form $m = m_1 m_2 \cdots m_l$, where each $m_j$ is a block of the message that has the same length as the block length of the encryption algorithm $E_K$. The last block may require padding in order to guarantee that it is a full block. Recall that the CBC encryption procedure is given by
$$C_j = E_K(m_j \oplus C_{j-1}),$$
where $C_0 = IV$ is the initialization vector. The CBC-MAC is then given as
$$\mathrm{MAC} = C_l,$$
and Alice then sends Bob $(m, \mathrm{MAC})$.
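A minimal sketch of the CBC-MAC computation follows. To keep it self-contained, a keyed function built from SHA-256 stands in for the block cipher $E_K$ (a hypothetical substitute chosen for this sketch; a real CBC-MAC would use AES).

```python
import hashlib

BLOCK = 16  # bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for E_K (e.g., AES): a keyed pseudorandom function.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    # Pad the last block with zero bytes so every block is full.
    if len(msg) % BLOCK:
        msg += b"\x00" * (BLOCK - len(msg) % BLOCK)
    c = b"\x00" * BLOCK                      # C_0 = IV (all zeros here)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        c = toy_block_cipher(key, bytes(a ^ b for a, b in zip(block, c)))
    return c                                 # final CBC output is the MAC

tag = cbc_mac(b"k", b"pay Bob 100 dollars")
```

Changing even one byte of the message changes the input to some $E_K$ call, and the change propagates through every later block into the final MAC.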
CBC-MAC has many of the same concerns as keyed hash functions. For example, CBC-MAC also suffers from its own form of length extension attacks. Let $t_m$ denote the CBC-MAC of a message $m$. Suppose that Eve has found two messages $m$ and $m'$ with $t_m = t_{m'}$. Then Eve knows that $t_{m\|x} = t_{m'\|x}$ for any additional block $x$. If Eve can convince Alice to authenticate $m\|x$, she can swap $m$ with $m'$ to create a forged document $m'\|x$ that appears valid. Of course, the trick is in convincing Alice to authenticate $m\|x$, but it nonetheless is a concern with CBC-MAC.
When using CBC-MAC, one should also be careful not to use the key for purposes other than authentication. A different key must be used for message confidentiality than for calculating the message authentication code. In particular, if one uses the same key for confidentiality as for authentication, then the output of CBC blocks during the encryption of a message for confidentiality could be the basis for a forgery attack against CBC-MAC.
We now look at how the communications take place in a password-based authentication protocol, as is commonly used to log onto computer systems. Password-based authentication is a popular approach to authentication because it is much easier for people to remember a phrase (the password) than it is to remember a long, cryptographic response.
In a password protocol, we have a user, Alice, and a verifier, Veronica. Veronica is often called a host and might, for example, be a computer terminal or a smartphone or an email service. Alice and Veronica share knowledge of the password, a long-term secret. Passwords typically belong to a much smaller space of possibilities than cryptographic keys, and thus they have small entropy (that is, there is much less randomness in a password).
Veronica keeps a list of all users and their passwords . For many hosts this list might be small since only a few users might log into a particular machine, while in other cases the password list can be more substantial. For example, email services must maintain a very large list of users and their passwords. When Alice wants to log onto Veronica’s service, she must contact Veronica and tell her who she is and give her password. Veronica, in turn, will check to see if Alice’s password is legitimate.
A basic password protocol proceeds in several steps:

1. Alice → Veronica: "Hello, I am Alice"
2. Veronica → Alice: "Password?"
3. Alice → Veronica: $p$
4. Veronica: Examines the password file and verifies that the pair (Alice, $p$) belongs to it. Service is granted if the password is confirmed.
This protocol is very straightforward and, as it stands, has many flaws that should stand out. Perhaps one of the most obvious problems is that there is no mutual authentication between Alice and Veronica; that is, Alice does not know that she is actually communicating with the legitimate Veronica and, vice versa, Veronica does not actually have any proof that she is talking with Alice. While the purpose of the password exchange is to authenticate Alice, in truth there is no protection from message replay attacks, and thus anyone can pretend to be either Alice or Veronica.
Another glaring problem is the lack of confidentiality in the protocol. Any eavesdropper who witnesses the communication exchange learns Alice’s password and thus can imitate Alice at a later time.
Lastly, another more subtle concern is the storage of the password file. Storing the passwords in the clear is a design liability, as there are no guarantees that a system administrator will not read the password file and leak passwords. Similarly, there is no guarantee that Veronica's system will not be hacked and the password file stolen.
We would like the passwords to be protected while they are stored in the password file. Needham proposed that, instead of storing the password $p$, Veronica should store $f(p)$, where $f$ is a one-way function that is difficult to invert, such as a cryptographic hash function. In this case, Step 4 involves Veronica checking $f$ applied to the password Alice sends against the stored value $f(p)$. This basic scheme is what was used in the original Unix password system, where the function $f$ was the variant of DES that used salt (see Chapter 7). Now, an adversary who gets the password file can't use it to respond in Step 3.
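A sketch of such a hashed password file follows. Here PBKDF2 (a salted, iterated hash from Python's standard library) plays the role of the one-way function $f$; the function and variable names are illustrative, and the original Unix scheme used a DES variant instead.

```python
import hashlib, hmac, os

def f(salt: bytes, password: str) -> bytes:
    # One-way function: salted, iterated hashing (PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

password_file = {}

def register(user: str, password: str):
    salt = os.urandom(16)
    password_file[user] = (salt, f(salt, password))   # store f(p), not p

def verify(user: str, password: str) -> bool:
    salt, stored = password_file[user]
    # constant-time comparison avoids timing leaks
    return hmac.compare_digest(stored, f(salt, password))

register("alice", "correct horse")
ok = verify("alice", "correct horse")
bad = verify("alice", "wrong guess")
```

An adversary who steals `password_file` sees only salts and hash values, not the passwords themselves.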
Nevertheless, although the revised protocol protects the password file, it does not address the eavesdropping concern. While we could attempt to address eavesdropping using encryption, such an approach would require an additional secret to be shared between Alice and Veronica (namely an encryption key). In the following, we present two solutions that do not require additional secret information to be shared between Alice and Veronica.
Alice wants to log in to a server using her password. A common way of doing this is the Secure Remote Password protocol (SRP).
First, the system needs to be set up to recognize Alice and her password. Alice has her login name $I$ and password $P$. Also, the server has chosen a large prime $q$ such that $(q-1)/2$ is also prime (this is to make the discrete logarithm problem mod $q$ hard) and a primitive root $g$ mod $q$. Finally, a cryptographic hash function $H$ such as SHA256 or SHA3 is specified.

A random bitstring $s$ is chosen (this is called "salt"), and $x = H(s, P)$ and $v \equiv g^x \pmod q$ are computed. The server discards $x$ and $P$, but stores $s$ and $v$ along with Alice's identification $I$. Alice saves only $I$ and $P$.
When Alice wants to log in, she and the server perform the following steps:
1. Alice sends $I$ to the server and the server retrieves the corresponding $s$ and $v$.
2. The server sends $s$ to Alice, who computes $x = H(s, P)$.
3. Alice chooses a random integer $a$ mod $q$ and computes $A \equiv g^a \pmod q$. She sends $A$ to the server.
4. The server chooses a random $b$ mod $q$, computes $B \equiv 3v + g^b \pmod q$, and sends $B$ to Alice.
5. Both Alice and the server compute $u = H(A, B)$.
6. Alice computes $S \equiv (B - 3g^x)^{a+ux} \pmod q$ and the server computes $S$ as $S \equiv (Av^u)^b \pmod q$ (these yield the same $S$; see below).
7. Alice computes $M_1 = H(A, B, S)$ and sends $M_1$ to the server, which checks that this agrees with the value of $M_1$ that it computes using its own values of $A$, $B$, $S$.
8. The server computes $M_2 = H(A, M_1, S)$ and sends $M_2$ to Alice. She checks that this agrees with the value of $M_2$ she computes with her values of $A$, $M_1$, $S$.
9. Both Alice and the server compute $K = H(S)$ and use this as the session key for communications.
Several comments are in order. We’ll number them corresponding to the steps in the protocol.
The server does not directly store $P$ or a hash of $P$. The hash of $P$ is stored in a form protected by a discrete logarithm problem. Originally, a hash of the password alone was used, and the salt was included to slow down brute force attacks on passwords. Eve can try various values of $P$, trying to match a value of $v$, until someone's password is found (this is called a "dictionary attack"). If a sufficiently long bitstring $s$ is included, this attack becomes infeasible. The current version of SRP includes $s$ in the hash to get $x = H(s, P)$, so Eve needs to attack each individual entry. Since Eve knows an individual's salt (if she obtained access to the password file), the salt is essentially part of the identification and does not slow down the attack on the individual's password.
Sending $s$ to Alice means that Alice does not need to remember $s$.
This is essentially the start of a protocol similar to the Diffie-Hellman key exchange.
Earlier versions of SRP used $B \equiv v + g^b \pmod q$. But this meant that an attacker posing as the server could choose a random $b$, compute $g^b$, and use $B \equiv v' + g^b \pmod q$ for a guessed value $v'$, thus allowing the attacker to check whether one of two password guesses yields the hash value $x$. In effect, this could speed up the attack by a factor of 2. The 3 is included to avoid this.
In an earlier version of SRP, $u$ was chosen randomly by the server and sent to Alice. However, if the server sends $u$ before $A$ is received (for example, it might seem more efficient for the server to send both $s$ and $u$ in Step 2), there is a possible attack. See Exercise 16. The present method of having $u = H(A, B)$ ensures that $u$ is determined after $A$.
Let's show that the two values of $S$ are equal:
$$(B - 3g^x)^{a+ux} \equiv (3v + g^b - 3g^x)^{a+ux} \equiv (g^b)^{a+ux} \equiv g^{ab+ubx} \pmod q$$
and
$$(Av^u)^b \equiv (g^a (g^x)^u)^b \equiv g^{ab+ubx} \pmod q.$$
Therefore, they agree. Note the hints of the Diffie-Hellman protocol, where $g^{ab}$ is computed in two ways.
Since the value of $b$ changes for each login, the value of $S$ also changes, so an attacker cannot simply reuse some successful $M_1$.
Up to this point, the server has no assurance that the communications are with Alice. Anyone could have sent Alice's $I$ and sent a random $A$. Alice and the server have computed $S$, but they don't know that their values agree. If they do, it is very likely that the correct $P$, hence the correct $x$, is being used. Checking $M_1$ shows that the values of $S$ agree, because of the collision resistance of the hash function. Of course, if the values of $S$ don't agree, then Alice and the server will produce different session keys in the last step, so communications will fail for this reason, too. But it seems better to terminate the protocol earlier if something is wrong.
How does Alice know that she is communicating with the server? This step tells Alice that the server's value of $S$ matches hers, so it is very likely that the entity she is communicating with knows the correct $v$. Of course, someone who has hacked into the password file has all the information that the server has and can therefore masquerade as the server. But otherwise Alice is confident that she is communicating with the server.
At this point, Alice and the server are authenticated to each other. The session key $K$ serves as the secret key for communications between Alice and the server during the current session.
Observe that $B$, $M_1$, and $M_2$ are the only numbers that are transmitted that depend on the password. The value of $B$ contains $v$, but this is masked by adding on the random number $g^b$. The values of $M_1$ and $M_2$ contain $S$, which depends on $x$, but it is safely hidden inside a hash function. Therefore, it is very unlikely that someone who eavesdrops on communications between Alice and the server will obtain any useful information. For more on the security and design considerations, see [Wu1], [Wu2].
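A full run of the exchange can be sketched numerically. The parameters below are toy assumptions for illustration: a tiny safe prime $q = 1019$ (with $(q-1)/2 = 509$ prime) and primitive root $g = 2$ stand in for the large prime a real deployment requires, and the hash encoding is an arbitrary choice. Real SRP also adds sanity checks (e.g., rejecting $A \equiv 0$) omitted here.

```python
import hashlib, secrets

q = 1019   # toy safe prime: (q-1)/2 = 509 is also prime (NOT secure)
g = 2      # a primitive root mod 1019

def H(*args) -> int:
    data = b"|".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Registration: server stores (I, s, v) and discards x and P.
I, P = "alice", "secret password"
s = secrets.randbits(64)              # salt
v = pow(g, H(s, P), q)

# Login:
a = secrets.randbelow(q - 2) + 1
A = pow(g, a, q)                      # Alice -> server
b = secrets.randbelow(q - 2) + 1
B = (3 * v + pow(g, b, q)) % q        # server -> Alice
u = H(A, B)                           # computed by both sides

x = H(s, P)                           # Alice recomputes x from s and P
S_alice = pow(B - 3 * pow(g, x, q), a + u * x, q)
S_server = pow(A * pow(v, u, q), b, q)

M1 = H(A, B, S_alice)                 # Alice -> server; server checks
M2 = H(A, M1, S_alice)                # server -> Alice; Alice checks
K = H(S_alice)                        # shared session key
```

Running this, `S_alice` and `S_server` always agree, which is exactly the identity verified in the comment on Step 6.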
Another method was proposed by Lamport. The protocol, which we now introduce, is an example of what is known as a one-time password scheme since each run of the protocol uses a temporary password that can only be used once. Lamport’s one-time password protocol is a good example of a special construction using one-way (specifically, hash) functions that shows up in many different applications.
To start, we assume that Alice has a password $p$, that Veronica has chosen a large integer $n$, and that Alice and Veronica have agreed upon a one-way function $f$. A good choice for such an $f$ is a cryptographic hash function, such as SHA256 or SHA3, which we described in Chapter 11. Veronica calculates the $n$-fold iteration $f^n(p) = f(f(\cdots f(p) \cdots))$ and stores Alice's entry $(\text{Alice},\, n,\, f^n(p))$ in a password file. Now, when Alice wants to authenticate herself to Veronica, she uses the revised protocol:
1. Alice → Veronica: "Hello, I am Alice."
2. Veronica → Alice: $n-1$, "Password?"
3. Alice → Veronica: $r = f^{n-1}(p)$
4. Veronica takes $r$ and checks whether $f(r)$ equals the stored value $f^n(p)$. If the check passes, then Veronica updates Alice's entry in the password file to $(\text{Alice},\, n-1,\, f^{n-1}(p))$, and Veronica grants Alice access to her services.
At first glance, this protocol might seem confusing, and to understand how it works in practice it is useful to write out the following chain of hashes, known as a hash chain:
$$p \;\to\; f(p) \;\to\; f^2(p) \;\to\; \cdots \;\to\; f^{n-1}(p) \;\to\; f^n(p).$$
For the first run of the protocol, in step 2, Veronica will tell Alice $n-1$ and ask Alice for the corresponding password that will hash to $f^n(p)$. In order for Alice to correctly respond, she must calculate $f^{n-1}(p)$, which she can do since she has the original password $p$. After Alice is successfully verified, Veronica will throw away $f^n(p)$ and update the password file to contain $f^{n-1}(p)$.
Now suppose that Eve saw $f^{n-1}(p)$. This won't help her, because the next time the protocol is run, in step 2 Veronica will issue $n-2$ and thereby ask Alice for the corresponding password that will hash to $f^{n-1}(p)$. Although Eve has $f^{n-1}(p)$, she cannot determine the required response $f^{n-2}(p)$.
The protocol continues to run, with Veronica updating her password file, until she reaches the end of the hash chain. At that point, Alice and Veronica must renew the password file by changing the initial password to a new password.
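The hash-chain mechanics can be sketched in a few lines. SHA-256 plays the role of $f$, and the chain length $n = 100$ is an arbitrary choice for the sketch.

```python
import hashlib

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def f_iter(x: bytes, k: int) -> bytes:
    for _ in range(k):                    # k-fold iteration f^k(x)
        x = f(x)
    return x

# Setup: Veronica stores (n, f^n(p)); Alice keeps only p.
p = b"alice's long-term password"
n = 100
stored = (n, f_iter(p, n))

def login(response: bytes) -> bool:
    """Veronica's check: hash the one-time password once, compare
    with the stored top of the chain, and move down on success."""
    global stored
    n_cur, top = stored
    if f(response) == top:
        stored = (n_cur - 1, response)    # update the password file
        return True
    return False

ok1 = login(f_iter(p, n - 1))   # first login: f^(n-1)(p) succeeds
ok2 = login(f_iter(p, n - 1))   # replaying the same value now fails
ok3 = login(f_iter(p, n - 2))   # the next one-time password succeeds
```

The replay in the second attempt fails precisely because Veronica has already moved one step down the chain.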
This protocol that we have just examined is the basis for the S/Key protocol, which was implemented in Unix operating systems in the 1980s. Although the S/Key protocol’s use of one-time passwords achieves its purpose of protecting the password exchange from eavesdropping, it is nevertheless weak when one considers an active adversary. In particular, the fact that the counter in step 2 is sent in the clear, and the lack of authentication, is the basis for an intruder-in-the-middle attack.
In the intruder-in-the-middle attack described below, Alice intends to communicate with Veronica, but the active adversary Malice intercepts the communications between Alice and Veronica and sends her own.
Alice → Malice (Veronica): "Hello, I am Alice."
Malice → Veronica: "Hello, I am Alice."
Veronica → Malice: $n-1$, "Password?"
Malice → Alice: $n-1$, "Password?"
Alice → Malice: $r = f^{n-1}(p)$
Malice → Veronica: $r$.
Veronica takes $r$ and checks whether $f(r) = f^n(p)$. The check will pass, and Veronica will think she is communicating with Alice, when really she is corresponding with Malice.
One of the problems with the protocol is that there is no authentication of the origin of the messages. Veronica does not know whether she is really communicating with Alice, and likewise Alice does not have a strong guarantee that she is communicating with Veronica.
The lack of origin authentication also provides the means to launch another clever attack, known as the small value attack. In this attack, Malice impersonates Veronica and asks Alice to respond to a small counter value. Then Malice intercepts Alice's answer and uses that to calculate the rest of the hash chain. For example, if Malice sent the value 2, Alice would respond with $f^2(p)$, and Malice could then calculate $f^3(p)$, $f^4(p)$, $f^5(p)$, and so on. The small counter value allows Malice to hijack the majority of the hash chain, and thereby imitate Alice at a later time.
As a final comment, we note that the protocol we described is actually different from what was originally presented by Lamport. In Lamport's original one-time password protocol, he required that Alice and Veronica keep track of the counter on their own, without exchanging it. This has the benefit of protecting the protocol from active attacks in which Malice attempts to use the counter to her advantage. Unfortunately, Lamport's scheme required that Alice and Veronica stay synchronized with each other, which in practice turns out to be difficult to ensure.
One variation of the concept of a hash chain is the blockchain. Blockchains are a technology that has garnered a lot of attention since they provide a convenient way to keep track of information in a secure and distributed manner.
Hash chains are iterations of hash functions, while blockchains are hash chains that have extra structure. In order to understand blockchains, we need to look at several different building blocks.
Let us start with hash pointers. A hash pointer is simply a pointer to where some data is stored, combined with a cryptographic hash of the value of the data that is being pointed at. We may visualize a hash pointer as something like Figure 12.1.
The hash pointer is useful in that it gives a means to detect alterations of the block of data. If someone alters the block, then the hash contained in the hash pointer will not match the hash of the altered data block, assuming of course that no one has altered the hash pointer.
If we want to make it harder for someone to alter the hash, then we can create an ordered collection of data blocks, each with a hash pointer to the previous block. This is precisely the idea behind a blockchain. A blockchain is a collection of data blocks and hash pointers arranged in a data structure known as a linked list. A normal linked list is a series of data blocks, each paired with a pointer to the previous block of data and its pointer. Blockchains, however, replace the pointer in a linked list with a hash pointer, as depicted in Figure 12.2.
Blockchains are useful because they allow entities to create a ledger of data that can be continually updated by one or more users. An initial version of the ledger is created, and then subsequent updates to the ledger reference previous updates to the ledger. In order to accomplish this, the basic structure of a blockchain consists of three main components: data, which is the ledger update; a pointer that tells where the previous block is located; and a digest (hash) that allows one to verify that the entire contents of the previous block have not changed.

Suppose that a large record of blocks has been stored in a blockchain, such as in Figure 12.2, with the final hash value not paired with any data. What happens, then, if an adversary comes along and wants to modify the data in block $k$? If the data in block $k$ is altered, then the hash contained in block $k+1$ will not match the hash of the modified data in block $k$. This forces the adversary to modify the hash stored in block $k+1$. But the hash of block $k+1$ was calculated based on the entire block (i.e., including the hash pointer to block $k$), and therefore there will now be a mismatch between the hash of block $k+1$ and the hash pointer in block $k+2$. This process forces the adversary to continue modifying blocks until the end of the blockchain is reached. This requires a significant effort on the part of the adversary, but is possible, since the adversary can calculate the hash values needed for the replacements. Ultimately, in order to prevent the adversary from succeeding, we require that the final hash value be stored in a manner that prevents the adversary from modifying it, thereby providing a final piece of evidence preventing modification of the blockchain.
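The propagation of mismatches just described can be demonstrated with a minimal sketch (the dictionary layout and function names are illustrative choices; the "pointer" part of the hash pointer is simply the list index here).

```python
import hashlib, json

def block_hash(block: dict) -> str:
    # Hash the entire block, including its hash pointer to the
    # previous block, so changes propagate forward.
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def append_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash})

def verify(chain, final_hash: str) -> bool:
    # final_hash is stored separately, out of the adversary's reach.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return block_hash(chain[-1]) == final_hash

chain = []
for d in ["ledger entry 1", "ledger entry 2", "ledger entry 3"]:
    append_block(chain, d)
final_hash = block_hash(chain[-1])     # kept where it can't be modified

ok_before = verify(chain, final_hash)  # chain is intact
chain[0]["data"] = "forged entry"      # adversary edits an early block
ok_after = verify(chain, final_hash)   # mismatch is detected
```

Because the separately stored final hash cannot be rewritten, the adversary's chain of replacements has nowhere to end.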
In practice, the data contained in each block can be quite large, consisting of many data records, and therefore one will often find another hash-based data structure used in blockchains. This data structure uses hash pointers arranged in a binary tree, and is known as a Merkle tree. Merkle trees are useful as they make it easy for someone to prove that a certain piece of data exists within a particular block of data.
In a Merkle tree, one has a collection of records of data that have been arranged as the leaves of a binary tree, such as shown in Figure 12.3. These data records are then grouped in pairs of two, and for each pair two hash pointers are created, one pointing to the left data record and another to the right data record. The collection of hash pointers then serve as the data record in the next level of the tree, which are subsequently grouped in pairs of two. Again, for each pair two hash pointers are created, one pointing to the left record and the other to the right data record. We proceed up the tree until we reach the end, which is a single record that corresponds to the tree’s root. The hash of this record is stored, giving one the ability to make certain that the entire collection of data records contained within the Merkle tree has not been altered.
Now suppose that someone wants to prove to you that a specific data record exists within a block. To do this, all they need to show you is the data record, along with the hashes on the path from the data record to the root of the Merkle tree. In particular, one does not need to show all of the other data records; one only needs to show the hashes at the higher levels. This process is efficient, requiring roughly $\log_2(n)$ items from the Merkle tree to be shown, where $n$ is the number of blocks of data recorded. For example, to verify that is in Figure 12.3, someone needs to show you , , (but not ), , the two inputs and the hash at the next level up, and the two inputs and the hash at the top. You can check these hash computations, and, since the hash function is collision-resistant, you can be sure that the record is there and has not been changed. Otherwise, at some level, a hash value has been changed and a collision has been found.
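A sketch of a Merkle tree with such inclusion proofs follows. The layout (duplicating the last node at an odd level, and recording whether each sibling sits to the left or right) is one common convention, chosen here for illustration.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def _next_level(level):
    if len(level) % 2:                  # duplicate last node if odd
        level = level + [level[-1]]
    return level, [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [H(d) for d in leaves]      # hash each data record
    while len(level) > 1:
        _, level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes on the path from leaf `index` to the root."""
    level, path = [H(d) for d in leaves], []
    while len(level) > 1:
        padded, nxt = _next_level(level)
        path.append((padded[index ^ 1], index % 2))  # (sibling, i-am-right?)
        level, index = nxt, index // 2
    return path

def verify_proof(data, path, root):
    h = H(data)
    for sibling, is_right in path:
        h = H(sibling + h) if is_right else H(h + sibling)
    return h == root

leaves = [b"rec0", b"rec1", b"rec2", b"rec3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)    # only log2(4) = 2 sibling hashes
ok = verify_proof(b"rec2", proof, root)
```

The proof contains two hashes for four records, matching the $\log_2(n)$ estimate above; none of the other data records themselves are revealed.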
In a family of four, what is the probability that no two people have birthdays in the same month? (Assume that all months have equal probabilities.)
Each person in the world flips 100 coins and obtains a sequence of length 100 consisting of Heads and Tails. (There are $2^{100}$ possible sequences.) Assume that there are approximately $10^{10}$ people in the world. What is the probability that two people obtain the same sequence of Heads and Tails? Your answer should be accurate to at least two decimal places.
Let $E_K$ be an encryption function with $N$ possible keys $K$, $N$ possible plaintexts, and $N$ possible ciphertexts. Assume that if you know the encryption key $K$, then it is easy to find the decryption function $D_K$ (therefore, this problem does not apply to public key methods). Suppose that, for each pair of keys $(K_1, K_2)$, it is possible to find a key $K_3$ such that $E_{K_1}(E_{K_2}(m)) = E_{K_3}(m)$ for all plaintexts $m$. Assume also that for every plaintext–ciphertext pair $(m, c)$, there is usually only one key $K$ such that $E_K(m) = c$. Suppose that you know a plaintext–ciphertext pair $(m, c)$. Give a birthday attack that usually finds the key $K$ in approximately $2\sqrt{N}$ steps. (Remark: This is much faster than brute force searching through all keys $K$, which takes time proportional to $N$.)
Show that the shift cipher (see Section 2.1) satisfies the conditions of part (a), and explain how to attack the shift cipher mod 26 using two lists of length 6. (Of course, you could also find the key by simply subtracting the plaintext from the ciphertext; therefore, the point of this part of the problem is to illustrate part (a).)
Alice uses double encryption with a block cipher to send messages to Bob, so $c = E_{K_1}(E_{K_2}(m))$ gives the encryption. Eve obtains a plaintext–ciphertext pair $(m, c)$ and wants to find $(K_1, K_2)$ by the Birthday Attack. Suppose that the output of $E_K$ has $n$ bits. Eve computes two lists:

1. $E_{K_1}(m)$ for $2^{n/2}$ randomly chosen keys $K_1$
2. $D_{K_2}(c)$ for $2^{n/2}$ randomly chosen keys $K_2$.
Why is there a very good chance that Eve finds a key pair $(K_1, K_2)$ such that $E_{K_1}(m) = D_{K_2}(c)$?
Why is it unlikely that $(K_1, K_2)$ is the correct key pair? (Hint: Look at the analysis of the Meet-in-the-Middle Attack in Section 6.5.)
What is the difference between the Meet-in-the-Middle Attack and what Eve does in this problem?
Each person who has ever lived on earth receives a deck of 52 cards and thoroughly shuffles it. What is the probability that two people have the cards in the same order? It is estimated that around $10^{11}$ people have ever lived on earth. The number of shuffles of 52 cards is $52! \approx 8 \times 10^{67}$.
Let be a 300-digit prime. Alice chooses a secret integer and encrypts messages by the function .
Suppose Eve knows a cipher text and knows the prime . She captures Alice’s encryption machine and decides to try to find by a birthday attack. She makes two lists. The first list contains for some random choices of . Describe how to generate the second list, state approximately how long the two lists should be, and describe how Eve finds if her attack is successful.
Is this attack practical? Why or why not?
There are approximately 10^147 primes with 150 digits. There are approximately 10^80 particles in the universe. If each particle chooses a random 150-digit prime, do you think two particles will choose the same prime? Explain why or why not.
If there are five people in a room, what is the probability that no two of them have birthdays in the same month? (Assume that each person has probability 1/12 of being born in any given month.)
You use a random number generator to generate random 15-digit numbers. What is the probability that two of the numbers are equal? Your answer should be accurate enough to say whether it is likely or unlikely that two of the numbers are equal.
Nelson has a hash function that gives an output of 60 bits. Friends tell him that this is not a big enough output, so he takes a strong hash function with a 200-bit output and uses as his hash function. That is, he first hashes with his old hash function, then hashes the result with the strong hash function to get a 200-bit output, which he thinks is much better. The new hash function can be computed quickly. Does it have preimage resistance, and does it have strong collision resistance? Explain your answers. (Note: Assume that computers can do up to computations for this problem. Also, since it is essentially impossible to prove rigorously that most hash functions have preimage resistance or collision resistance, if your answer to either of these is “yes” then your explanation is really an explanation of why it is probably true.)
Bob signs contracts by signing the hash values of the contracts. He is using a hash function with a 50-bit output. Eve has a document that states that Bob will pay her a lot of money. Eve finds a file with documents that Bob has signed. Explain how Eve can forge Bob’s signature on a document (closely related to ) that states that Bob will pay Eve a lot of money. (Note: You may assume that Eve can do up to calculations.)
This problem derives the formula (12.1) for the probability of at least one match in a list of length r when there are N possible birthdays.
Let and . Show that and for .
Using the facts that and is decreasing and is increasing, show that
Show that if , then
(Hint: and .)
Let and assume that (this implies that ). Show that
with and .
Observe that when is large, is close to 1. Use this to show that as becomes large and is constant with , then we have the approximation
Suppose H is a function with n-bit outputs and with inputs much larger than n bits (this implies that collisions must exist). We know that, with a birthday attack, we have probability 1/2 of finding a collision in approximately 2^(n/2) steps.
Suppose we repeat the birthday attack until we find a collision. Show that the expected number of repetitions is
(one way to evaluate the sum, call it , is to write ).
Assume that each evaluation of takes time a constant times . (This is realistic since the inputs needed to find collisions can be taken to have bits, for example.) Show that the expected time to find a collision for the function is a constant times .
Show that the expected time to produce the messages in Section 12.2 is a constant times .
Suppose we have an iterative hash function, as in Section 11.3, but suppose we adjust the function slightly at each iteration. For concreteness, assume that the algorithm proceeds as follows. There is a compression function that operates on inputs of a fixed length. There is also a function that yields outputs of a fixed length, and there is a fixed initial value . The message is padded to obtain the desired format, then the following steps are performed:
Split the message into blocks .
Let be the initial value .
For , let .
Let .
Show that the method of Section 12.2 can be used to produce multicollisions for this hash function.
Some of the steps of SRP are similar to the Diffie-Hellman key exchange. Why not use Diffie-Hellman to log in, using the following protocol? Alice and the server use Diffie-Hellman to establish a key . Or they could use a public key method to transmit a secret key from the server to Alice. Then they use , along with a symmetric system such as AES, to submit Alice’s password . Finally, the hash of the password is compared to what is stored in the computer’s password file.
Show how Eve can do an intruder-in-the-middle attack and obtain Alice’s password.
In order to avoid the attack in part (a), Alice and the server decide that Alice should send the hash of her password. Show that if Eve uses an intruder-in-the-middle attack, then she can log in to the server, pretending to be Alice.
Alice and the server have another idea. The server sends Alice a random and Alice sends to the server. Show how Eve can use an intruder-in-the-middle-attack to log in as Alice.
Suppose that in SRP, the number is chosen randomly by the server and sent to Alice at the same time that is sent. Suppose Eve has obtained from the server’s password file. Eve chooses a random , computes mod , and sends this value of to the server. Then Eve computes as mod . Show that these computations appear to be valid to the server, so Eve can log in as Alice.
If there are 30 people in a classroom, what is the probability that at least two have the same birthday? Compare this to the approximation given by formula (8.1).
How many people should there be in a classroom in order to have a 99% chance that at least two have the same birthday? (Hint: Use the approximation to obtain an approximate answer. Then use the product, for various numbers of people, until you find the exact answer.)
How many people should there be in a classroom in order to have 100% probability that at least two have the same birthday?
A professor posts the grades for a class using the last four digits of the Social Security number of each student. In a class of 200 students, what is the probability that at least two students have the same four digits?
For years, people have been using various types of signatures to associate their identities to documents. In the Middle Ages, a nobleman sealed a document with a wax imprint of his insignia. The assumption was that the noble was the only person able to reproduce the insignia. In modern transactions, credit card slips are signed. The salesperson is supposed to verify the signature by comparing it with the signature on the card. With the development of electronic commerce and electronic documents, these methods no longer suffice.
For example, suppose you want to sign an electronic document. Why can’t you simply digitize your signature and append it to the document? Anyone who has access to the document can simply remove the signature and add it to something else, for example, a check for a large amount of money. With classical signatures, this would require cutting the signature off the document, or photocopying it, and pasting it on the check. This would rarely pass for an acceptable signature. However, such an electronic forgery is quite easy and cannot be distinguished from the original.
Therefore, we require that digital signatures cannot be separated from the message and attached to another. That is, the signature is not only tied to the signer but also to the message that is being signed. Also, the digital signature needs to be easily verified by other parties. Digital signature schemes therefore consist of two distinct steps: the signing process, and the verification process.
In the following, we first present two signature schemes. We also discuss the important “birthday attacks” on signature schemes.
Note that we are not trying to encrypt the message m. In fact, often the message is a legal document, and therefore should be kept public. However, if necessary, a signed message may be encrypted after it is signed. (This is done in PGP, for example. See Section 15.6.)
Bob has a document m that Alice agrees to sign. They do the following:
Alice generates two large primes p, q, and computes n = pq. She chooses e such that 1 < e < φ(n) with gcd(e, φ(n)) = 1 and calculates d such that de ≡ 1 (mod φ(n)). Alice publishes (e, n) and keeps private d, p, q.
Alice’s signature is y ≡ m^d (mod n).
The pair (m, y) is then made public.
Bob can then verify that Alice really signed the message by doing the following:
Download Alice’s (e, n).
Calculate z ≡ y^e (mod n). If z = m, then Bob accepts the signature as valid; otherwise the signature is not valid.
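The signing and verification steps above can be sketched in code. This is a minimal toy sketch with tiny primes chosen purely for readability (real use requires primes of several hundred digits and a padding scheme); the parameter values and function names are illustrative assumptions, not part of the original text.

```python
# Toy sketch of the RSA signature scheme: sign with d, verify with e.
def make_keys(p, q, e):
    """Return public (e, n) and private d for the signing scheme."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # d with e*d ≡ 1 (mod phi(n))
    return (e, n), d

def sign(m, d, n):
    """Alice's signature y ≡ m^d (mod n)."""
    return pow(m, d, n)

def verify(m, y, e, n):
    """Bob accepts if y^e ≡ m (mod n)."""
    return pow(y, e, n) == m

(public_e, n), d = make_keys(61, 53, 17)   # illustrative toy parameters
m = 1234
y = sign(m, d, n)
assert verify(m, y, public_e, n)
assert not verify((m + 1) % n, y, public_e, n)   # signature is tied to m
```

Note that the same modular-exponentiation machinery serves both roles: signing is exponentiation by the private d, verification by the public e.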
Suppose Eve wants to attach Alice’s signature to another message m1. She cannot simply use the pair (m1, y), since y^e ≢ m1 (mod n). Therefore, she needs y1 with y1^e ≡ m1 (mod n). This is the same problem as decrypting an RSA “ciphertext” m1 to obtain the “plaintext” y1. This is believed to be hard to do.
Another possibility is that Eve chooses y1 first, then lets the message be m1 ≡ y1^e (mod n). It does not appear that Alice can deny having signed the message m1 under the present scheme. However, it is very unlikely that m1 will be a meaningful message. It will probably be a random sequence of characters, and not a message committing her to give Eve millions of dollars. Therefore, Alice’s claim that it has been forged will be believable.
There is a variation on this procedure that allows Alice to sign a document without knowing its contents. Suppose Bob has made an important discovery. He wants to record publicly what he has done (so he will have priority when it comes time to award Nobel prizes), but he does not want anyone else to know the details (so he can make a lot of money from his invention). Bob and Alice do the following. The message to be signed is m.
Alice chooses an RSA modulus n (= pq, the product of two large primes), an encryption exponent e, and decryption exponent d. She makes n and e public while keeping d private. In fact, she can erase p, q, and d from her computer’s memory at the end of the signing procedure.
Bob chooses a random integer k mod n with gcd(k, n) = 1 and computes t ≡ k^e m (mod n). He sends t to Alice.
Alice signs t by computing s ≡ t^d (mod n). She returns s to Bob.
Bob computes s/k ≡ s k^(−1) (mod n). This is the signed message m^d.
Let’s show that s/k is the signed message: Note that k^(ed) ≡ k (mod n), since this is simply the encryption, then decryption, of k in the RSA scheme. Therefore,
s/k ≡ t^d/k ≡ (k^e m)^d / k ≡ k^(ed) m^d / k ≡ m^d (mod n),
which is the signed message.
The choice of k is random, so k^e (mod n) is the RSA encryption of a random number, and hence random. Therefore, t gives essentially no information about m (however, it would not hide a message such as m = 0). In this way, Alice knows nothing about the message she is signing.
Once the signing procedure is finished, Bob has the same signed message as he would have obtained via the standard signing procedure.
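The blinding protocol above can be sketched numerically. This is a toy illustration with the same small RSA parameters as before (illustrative assumptions, far too small for real use): Bob blinds m with a random k, Alice signs the blinded value without learning m, and Bob unblinds to obtain m^d mod n.

```python
# Sketch of the blind-signature protocol: blind, sign, unblind.
import math
import random

p, q, e = 61, 53, 17                     # toy RSA parameters
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # Alice's private exponent

m = 999                                  # Bob's secret message, m < n
while True:
    k = random.randrange(2, n)           # random blinding factor,
    if math.gcd(k, n) == 1:              # invertible mod n
        break

t = (pow(k, e, n) * m) % n               # Bob sends t ≡ k^e m to Alice
s = pow(t, d, n)                         # Alice signs blindly: s ≡ t^d
signed = (s * pow(k, -1, n)) % n         # Bob removes k: s/k ≡ m^d (mod n)

assert signed == pow(m, d, n)            # same as an ordinary signature on m
assert pow(signed, e, n) == m            # and it verifies as usual
```

The unblinding step works because s ≡ k^(ed) m^d ≡ k m^d (mod n), so dividing by k leaves exactly m^d.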
There are several potential dangers with this protocol. For example, Bob could have Alice sign a promise to pay him a million dollars. Safeguards are needed to prevent such problems. We will not discuss these here.
Schemes such as these, called blind signatures, have been developed by David Chaum, who has several patents on them.
The ElGamal encryption method from Section 10.5 can be modified to give a signature scheme. One feature that is different from RSA is that, with the ElGamal method, there are many different signatures that are valid for a given message.
Suppose Alice wants to sign a message. To start, she chooses a large prime p and a primitive root α. Alice next chooses a secret integer a such that 1 ≤ a ≤ p − 2 and calculates β ≡ α^a (mod p). The values of p, α, and β are made public. The security of the system will be in the fact that a is kept private. It is difficult for an adversary to determine a from (p, α, β) since the discrete log problem is considered difficult.
In order for Alice to sign a message m, she does the following:
Selects a secret random k with 1 ≤ k ≤ p − 2 such that gcd(k, p − 1) = 1.
Computes r ≡ α^k (mod p) (with 0 ≤ r < p).
Computes s ≡ k^(−1)(m − ar) (mod p − 1).
The signed message is the triple (m, r, s).
Bob can verify the signature as follows:
Download Alice’s public key (p, α, β).
Compute v1 ≡ β^r r^s (mod p), and v2 ≡ α^m (mod p).
The signature is declared valid if and only if v1 ≡ v2 (mod p).
We now show that the verification procedure works. Assume the signature is valid. Since s ≡ k^(−1)(m − ar) (mod p − 1), we have sk ≡ m − ar (mod p − 1), so m ≡ sk + ar (mod p − 1). Therefore (recall that a congruence mod p − 1 in the exponent yields an overall congruence mod p),
v2 ≡ α^m ≡ α^(sk + ar) ≡ (α^k)^s (α^a)^r ≡ r^s β^r ≡ v1 (mod p).
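The signing and verification steps can be sketched numerically. The parameters below (p = 467, α = 2, secret a = 127, ephemeral k = 213) are classic toy textbook-scale values chosen for illustration only; a real deployment needs a prime p for which discrete logarithms are infeasible.

```python
# Toy sketch of ElGamal signatures: sign with (a, k), verify publicly.
p, alpha = 467, 2            # 2 is a primitive root mod the prime 467
a = 127                      # Alice's secret
beta = pow(alpha, a, p)      # public key component

def sign(m, k):
    """Sign m using an ephemeral k with gcd(k, p-1) = 1."""
    r = pow(alpha, k, p)
    s = (pow(k, -1, p - 1) * (m - a * r)) % (p - 1)
    return r, s

def verify(m, r, s):
    """Valid iff beta^r * r^s ≡ alpha^m (mod p)."""
    v1 = (pow(beta, r, p) * pow(r, s, p)) % p
    v2 = pow(alpha, m, p)
    return v1 == v2

r, s = sign(100, 213)        # gcd(213, 466) = 1
assert verify(100, r, s)     # correct signature accepted
assert not verify(101, r, s) # altered message rejected
```

Note that unlike RSA, a fresh random k gives many different valid signatures for the same message.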
Suppose Eve discovers the value of a. Then she can perform the signing procedure and produce Alice’s signature on any desired document. Therefore, it is very important that a remain secret.
If Eve has another message m1, she cannot compute the corresponding s1 since she doesn’t know a. Suppose she tries to bypass this step by choosing an s that satisfies the verification equation. This means she needs to satisfy
β^r r^s ≡ α^(m1) (mod p).
This can be rearranged to r^s ≡ α^(m1) β^(−r) (mod p), which is a discrete logarithm problem. Therefore, it should be hard to find an appropriate s. If s is chosen first, the equation for r is similar to a discrete log problem, but more complicated. It is generally assumed that it is also difficult to solve. It is not known whether there is a way to choose r and s simultaneously, though this seems to be unlikely. Therefore, the signature scheme appears to be secure, as long as discrete logs mod p are difficult to compute (for example, p − 1 should not be a product of small primes; see Section 10.2).
Suppose Alice wants to sign a second document. She must choose a new random value of k. Suppose instead that she uses the same k for messages m1 and m2. Then the same value of r is used in both signatures, so Eve will see that k has been used twice. The values of s are different; call them s1 and s2. Eve knows that
s1 k ≡ m1 − ar (mod p − 1) and s2 k ≡ m2 − ar (mod p − 1).
Therefore,
(s1 − s2) k ≡ m1 − m2 (mod p − 1).
Let d = gcd(s1 − s2, p − 1). There are d solutions k to the congruence, and they can be found by the procedure given in Subsection 3.3.1. Usually d is small, so there are not very many possible values of k. Eve computes α^k for each possible k until she gets the value r. She now knows k. Eve now solves
ar ≡ m1 − s1 k (mod p − 1)
for a. There are gcd(r, p − 1) possibilities. Eve computes α^a for each one until she obtains β, at which point she has found a. She now has completely broken the system and can reproduce Alice’s signatures at will.
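The repeated-k attack just described can be sketched in code. The parameters (p = 467, α = 2, secret a = 127, reused k = 213) are toy illustrative values; from two signatures sharing the same r, the attacker recovers k and then a.

```python
# Sketch of the repeated-k attack on ElGamal signatures.
from math import gcd

p, alpha = 467, 2
a_secret = 127                       # what Eve wants to recover
beta = pow(alpha, a_secret, p)

def sign(m, k):
    r = pow(alpha, k, p)
    s = (pow(k, -1, p - 1) * (m - a_secret * r)) % (p - 1)
    return r, s

k = 213
m1, m2 = 100, 101
r, s1 = sign(m1, k)
_, s2 = sign(m2, k)                  # same r betrays the reused k

# Step 1: solve (s1 - s2) k ≡ m1 - m2 (mod p-1); there are
# d = gcd(s1 - s2, p-1) solutions, each checked against alpha^k ≡ r.
d = gcd(s1 - s2, p - 1)
k0 = (pow((s1 - s2) // d, -1, (p - 1) // d) * ((m1 - m2) // d)) % ((p - 1) // d)
k_found = next(k0 + i * ((p - 1) // d) for i in range(d)
               if pow(alpha, k0 + i * ((p - 1) // d), p) == r)

# Step 2: solve a*r ≡ m1 - s1*k (mod p-1) for a, checking against beta.
d2 = gcd(r, p - 1)
rhs = (m1 - s1 * k_found) % (p - 1)
a0 = (pow(r // d2, -1, (p - 1) // d2) * (rhs // d2)) % ((p - 1) // d2)
a_found = next(a0 + i * ((p - 1) // d2) for i in range(d2)
               if pow(alpha, a0 + i * ((p - 1) // d2), p) == beta)

assert (k_found, a_found) == (k, a_secret)
```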
Alice wants to sign the message (which corresponds to one, if we let ). She chooses . Then is a primitive root. She has a secret number . She computes . To sign the message, she chooses a random number and keeps it secret. She computes . Then she computes
The signed message is the triple .
Now suppose Alice also signs the message (which is two) and produces the signed message . Immediately, Eve recognizes that Alice used the same value of , since the value of is the same in both signatures. She therefore writes the congruence
Since , there are two solutions, which can be found by the method described in Subsection 3.3.1. Divide the congruence by 2:
This has the solution , so there are two values of , namely 239 and . Calculate
Since the first is the correct value of , Eve concludes that . She now rewrites to obtain
Since , there are two solutions, namely and , which can be found by the method of Subsection 3.3.1. Eve computes
Since the second value is , she has found that .
Now that Eve knows , she can forge Alice’s signature on any document.
The ElGamal signature scheme is an example of a signature with appendix. The message m is not easily recovered from the signature (r, s). The message must be included in the verification procedure. This is in contrast to the RSA signature scheme, which is a message recovery scheme. In this case, the message is readily obtained from the signature y. Therefore, only y needs to be sent since anyone can deduce m as y^e (mod n). It is unlikely that a random y will yield a meaningful message m, so there is little danger that someone can successfully replace a valid message with a forged message by changing y.
In the two signature schemes just discussed, the signature can be longer than the message. This is a disadvantage when the message is long. To remedy the situation, a hash function is used. The signature scheme is then applied to the hash of the message, rather than to the message itself.
The hash function h is made public. Starting with a message m, Alice calculates the hash h(m). This output is significantly smaller, and hence signing the hash may be done more quickly than signing the entire message. Alice calculates the signature sig(h(m)) and uses it as the signature of the message. The pair (m, sig(h(m))) now conveys basically the same knowledge as the original signature scheme did. It has the advantages that it is faster to create (under the reasonable assumption that the hash operation is quick) and requires less resources for transmission or storage.
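Hash-then-sign can be sketched by combining a standard hash with the toy RSA signer from earlier. This is an illustrative assumption on my part: SHA-256 stands in for the public hash function, and the digest is reduced mod n only because the toy modulus is tiny. A real scheme uses a proper encoding such as RSASSA-PSS rather than raw reduction.

```python
# Sketch of hash-then-sign: sign the short digest H(m), not m itself.
import hashlib

p, q, e = 61, 53, 17                 # toy RSA parameters (illustrative)
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message: bytes) -> int:
    """H(m) as an integer mod n (toy reduction for illustration)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"a very long contract ..." * 1000   # long m, short digest
h = digest(message)
signature = pow(h, d, n)             # sign only the digest

# Verification: recompute the digest and check the signature on it.
assert pow(signature, e, n) == digest(message)
```

The signature has a fixed, small size no matter how long the message is, which is the point of hashing first.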
Is this method secure? Suppose Eve has possession of Alice’s signed message (m, sig(h(m))). She has another message m1 to which she wants to add Alice’s signature. This means that she needs sig(h(m1)) = sig(h(m)); in particular, she needs h(m1) = h(m). If the hash function is one-way, Eve will find it hard to find any such m1. The chance that her desired m1 will work is very small. Moreover, since we require our hash function to be strongly collision-resistant, it is unlikely that Eve can find two messages m and m1 with the same signatures. Of course, if she did, she could have Alice sign m, then transfer her signature to m1. But Alice would get suspicious since m (and m1) would very likely be meaningless messages.
In the next section, however, we’ll see how Eve can trick Alice if the size of the message digest is too small (and we’ll see that the hash function will not be strongly collision-resistant, either).
Alice is going to sign a document electronically by using one of the signature schemes to sign the hash of the document. Suppose the hash function produces an output of 50 bits. She is worried that Fred will try to trick her into signing an additional contract, perhaps for swamp land in Florida, but she feels safe because the chance of a fraudulent contract having the same hash as the correct document is 1 out of 2^50, which is approximately 1 out of 10^15. Fred can try several fraudulent contracts, but it is very unlikely that he can find one that has the right hash. Fred, however, has studied the birthday attack and does the following. He finds 30 places where he can make a slight change in the document: adding a space at the end of a line, changing a wording slightly, etc. At each place, he has two choices: Make the small change or leave the original. Therefore, he can produce 2^30 documents that are essentially identical with the original. Surely, Alice will not object to any of these versions. Now, Fred computes the hash of each of the 2^30 versions and stores them. Similarly, he makes 2^30 versions of the fraudulent contract and stores their hashes. Consider the generalized birthday problem with N = 2^50 and r = 2^30. The probability of a match is approximately
1 − e^(−r^2/N) = 1 − e^(−2^10) ≈ 1.
Therefore, it is very likely that a version of the good document has the same hash as a version of the fraudulent contract. Fred finds the match and asks Alice to sign the good version. He plans to append her signature to the fraudulent contract, too. Since they have the same hash, the signature would be valid for the fraudulent one, so Fred could claim that Alice agreed to buy the swamp land.
But Alice is an English teacher and insists on removing a comma from one sentence. Then she signs the document, which has a completely different hash from the document Fred asked her to sign. Fred is foiled again. He now is faced with the prospect of trying to find a fraudulent contract that has the same hash as this new version of the good document. This is essentially impossible.
What Fred did is called the birthday attack. Its practical implication is that you should probably use a hash function with output twice as long as what you believe to be necessary, since the birthday attack effectively halves the number of bits. What Alice did is a recommended way to foil the birthday attack on signature schemes. Before signing an electronic document, make a slight change.
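Fred’s attack can be simulated at a small scale. In this sketch the hash is truncated to 20 bits, so two lists of 2^14 document variants are virtually certain to contain a cross-match; the contract texts and the “variation spots” (optional trailing spaces) are made-up illustrations, not anything from the original text.

```python
# Small-scale birthday attack on a deliberately weakened hash.
import hashlib

def tiny_hash(doc: str) -> int:
    """A weak 20-bit hash: the top 20 bits of SHA-256."""
    return int.from_bytes(hashlib.sha256(doc.encode()).digest(), "big") >> 236

def variants(base: str, spots: int):
    """Yield 2^spots versions of base: each of the first `spots` lines
    either gains a trailing space or stays unchanged."""
    lines = base.split("\n")
    for mask in range(2 ** spots):
        tweaked = [line + (" " if (mask >> i) & 1 else "")
                   for i, line in enumerate(lines[:spots])]
        yield "\n".join(tweaked + lines[spots:])

good = "\n".join(f"good contract, clause {i}" for i in range(16))
bad = "\n".join(f"fraudulent contract, clause {i}" for i in range(16))

good_hashes = {tiny_hash(v): v for v in variants(good, 14)}
match = None
for v in variants(bad, 14):          # look for a good/bad hash collision
    if tiny_hash(v) in good_hashes:
        match = (good_hashes[tiny_hash(v)], v)
        break

# Two lists of 2^14 over 2^20 hash values: a match is near-certain.
assert match is not None
assert tiny_hash(match[0]) == tiny_hash(match[1])
```

Getting Alice to sign the matching good version then yields a signature that also verifies on the matching fraudulent version.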
The National Institute of Standards and Technology proposed the Digital Signature Algorithm (DSA) in 1991 and adopted it as a standard in 1994. Later versions increased the sizes of the parameters. Just like the ElGamal method, DSA is a digital signature scheme with appendix. Also, like other schemes, it is usually a message digest that is signed. In this case, let’s say the hash function produces a 256-bit output. We will assume in the following that our data message has already been hashed. Therefore, we are trying to sign a 256-bit message.
The generation of keys for DSA proceeds as follows. First, there is an initialization phase:
Alice finds a prime q that is 256 bits long and chooses a prime p that satisfies q | p − 1 (see Exercise 15). The discrete log problem should be hard for this choice of p. (In the initial version, p had 512 bits. Later versions of the standard require longer primes, for example, 2048 bits.)
Let α0 be a primitive root mod p and let α ≡ α0^((p−1)/q) (mod p). Then α^q ≡ 1 (mod p).
Alice chooses a secret a such that 1 ≤ a < q and calculates β ≡ α^a (mod p).
Alice publishes (p, q, α, β) and keeps a secret.
Alice signs a message m by the following procedure:
Select a random, secret integer k such that 0 < k < q.
Compute r ≡ (α^k mod p) (mod q).
Compute s ≡ k^(−1)(m + ar) (mod q).
Alice’s signature for m is (r, s), which she sends to Bob along with m.
For Bob to verify, he must
Download Alice’s public information (p, q, α, β).
Compute u1 ≡ s^(−1) m (mod q), and u2 ≡ s^(−1) r (mod q).
Compute v ≡ (α^(u1) β^(u2) mod p) (mod q).
Accept the signature if and only if v = r.
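The DSA steps above can be sketched numerically. Real DSA uses a 256-bit q and a far larger p; the pair p = 607, q = 101 (with 101 | 606), the secret a, and the ephemeral k below are purely illustrative assumptions.

```python
# Toy sketch of DSA key setup, signing, and verification.
p, q = 607, 101                    # primes with q | p - 1  (606 = 6 * 101)
alpha = pow(2, (p - 1) // q, p)    # element of order q mod p
assert alpha != 1 and pow(alpha, q, p) == 1

a = 57                             # Alice's secret, 0 < a < q
beta = pow(alpha, a, p)            # public key component

def sign(m, k):
    """Sign digest m with a secret ephemeral 0 < k < q."""
    r = pow(alpha, k, p) % q
    s = (pow(k, -1, q) * (m + a * r)) % q
    return r, s

def verify(m, r, s):
    """Accept iff (alpha^u1 * beta^u2 mod p) mod q equals r."""
    u1 = (pow(s, -1, q) * m) % q
    u2 = (pow(s, -1, q) * r) % q
    v = (pow(alpha, u1, p) * pow(beta, u2, p) % p) % q
    return v == r

r, s = sign(88, 29)
assert verify(88, r, s)            # correct signature accepted
assert not verify(89, r, s)        # altered digest rejected
```

Note that both r and s live mod the small prime q, which is what keeps DSA signatures short even though exponentiation happens mod p.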
We show that the verification works. By the definition of s, we have
s ≡ k^(−1)(m + ar) (mod q),
which implies
k ≡ s^(−1)(m + ar) ≡ u1 + a u2 (mod q).
Therefore,
α^(u1) β^(u2) ≡ α^(u1 + a u2) ≡ α^k (mod p).
So (α^(u1) β^(u2) mod p) (mod q) = (α^k mod p) (mod q). Thus v = r.
As in the ElGamal scheme, the integer a must be kept secret. Anyone who has knowledge of a can sign any desired document. Also, if the same value of k is used twice, it is possible to find k by the same procedure as before.
In contrast to the ElGamal scheme, the integer r does not carry full information about α^k mod p. Knowing r allows us to find only the mod q value of α^k mod p. There are approximately p/q numbers mod p that reduce to a given number mod q.
What is the advantage of having α^q ≡ 1 (mod p) rather than using a primitive root? Recall the Pohlig-Hellman attack for solving the discrete log problem. It could find information mod small prime factors of p − 1, but it was useless mod large prime factors such as q. In the ElGamal scheme, an attacker could determine a mod 2^t, where 2^t is the largest power of 2 dividing p − 1. This would not come close to finding a, but the general philosophy is that many little pieces of information collectively can often be useful. The DSA avoids this problem by removing all but the mod q information.
In the ElGamal scheme, three modular exponentiations are needed in the verification step. This step is modified for the DSA so that only two modular exponentiations are needed. Since modular exponentiation is one of the slower parts of the computation, this change speeds up the verification, which can be important if many signatures need to be verified in a short time.
Show that if someone discovers the value of used in the ElGamal signature scheme, then can also be determined if is small.
Alice signs the hash of a message. Suppose her hash function satisfies and for all . Suppose is a valid signed message from Alice. Give another message for which the same signature is also valid.
Alice says that she is willing to sign a petition to save koala bears. Alice’s signing algorithm uses a hash function that has an output of 60 bits (and she signs the hash of the document). Describe how Eve can trick Alice into signing a statement allowing Eve unlimited withdrawals from Alice’s bank account.
Alice uses RSA signatures (without a hash function).
Eve wants to produce Alice’s signature on the document . Why is this difficult? Explain this by saying what difficult cryptographic problem must be solved. (Do not say that it’s because Eve does not know the decryption exponent . Why isn’t there another way to produce the signature?)
Since part (a) is too hard for Eve, she decides to produce Alice’s valid RSA signature on a document so that the signature is (= Alice, when , , etc.). How does Eve find an appropriate message?
Nelson thinks he has a new version of the signature scheme. He chooses RSA parameters , , and . He signs by computing . The verification equation is .
Show that if Nelson correctly follows the signing procedure, or if he doesn’t, then the signature is declared valid.
Show that Eve can forge Nelson’s signature on any document , even though she does not know .
(The point of this exercise is that the verification equation is important. All Eve needs to do is satisfy the verification equation. She does not need to follow the prescribed procedure for producing the signature.)
Alice has a long message . She breaks into blocks of 256 bits: . She regards each block as a number between 0 and , and she signs the sum . This means her signed message is , where sig is the signing function. Is this a good idea? Why or why not?
Suppose that is a message signed with the ElGamal signature scheme. Choose with and let . Let .
Find a message for which is a valid signature.
This method allows Eve to forge a signature on the message . Why is it unlikely that this causes problems?
Let , , , and . Show that . This shows that the order of operations in the DSA is important.
There are many variations to the ElGamal digital signature scheme that can be obtained by altering the signing equation . Here are some variations.
Consider the signing equation . Show that the verification is a valid verification procedure.
Consider the signing equation . Show that the verification is a valid verification procedure.
Consider the signing equation . Show that the verification is a valid verification procedure.
Consider the following variant of the ElGamal Signature Scheme: Alice chooses a large prime , a primitive root , and a secret integer . She computes . The numbers are made public and is kept secret. If , Alice signs as follows: She chooses a random integer and computes , with , and . The signed message is . Bob verifies the signature by checking that . If , she breaks into blocks and signs each block.
Show that if Alice signs correctly then the verification congruence is satisfied.
Suppose Eve has a document and she wants to forge Alice’s signature on . That is, she wants to find and such that is valid. Eve chooses and tries to find a suitable . Why will it probably be hard to find ?
Suppose Alice has a very long message and wants to decrease the size of the signature. How can she use a hash function to do this? Explicitly give the modifications of the above equations that must be done to accomplish this.
The ElGamal signature scheme presented is weak to a type of attack known as existential forgery. Here is the basic existential forgery attack. Choose integers u and v such that gcd(v, p − 1) = 1. Compute r ≡ α^u β^v (mod p) and s ≡ −r v^(−1) (mod p − 1).
Prove the claim that the pair (r, s) is a valid signature for the message m ≡ su (mod p − 1) (of course, it is likely that m is not a meaningful message).
Suppose a hash function h is used and the signature must be valid for h(m) instead of for m (so we need to have α^(h(m)) ≡ β^r r^s (mod p)). Explain how this scheme protects against existential forgery. That is, explain why it is hard to produce a forged, signed message by this procedure.
Alice’s RSA public key is and her private key is . Recall that a document with an RSA signature is valid if . Bob wants Alice to sign a document but he does not want Alice to read the document. Assume . They do the following:
Bob chooses a random integer with . He computes .
Alice signs by computing .
Bob divides by mod to obtain .
Show that is valid.
Why is it assumed that ?
Alice wants to sign a document using the ElGamal signature scheme. Suppose her random number generator is broken, so she uses in the signature scheme. How will Eve notice this and how can Eve determine the values of and (and thus break the system)?
Suppose Alice signs contracts using a 30-bit hash function (and is known to everyone). If is the contract, then is the signed contract (where sig is some public signature function). Eve has a file of fraudulent contracts. She finds a file with contracts with valid signatures (by Alice) on them. Describe how Eve can accomplish her goal of putting Alice’s signature on at least one fraudulent document.
In several cryptographic protocols, one needs to choose a prime p such that (p − 1)/2 is also prime. One way to do this is to choose a prime q at random and then test p = 2q + 1 for primality. Suppose q is chosen to have approximately 300 decimal digits. Assume p is a random odd integer of 300 digits. (This is not quite accurate, since p cannot be congruent to 1 mod 3, for example. But the assumption is good enough for a rough estimate.) Show that the probability that p is prime is approximately 1/345 (use the prime number theorem, as in Section 9.3). This means that with approximately 345 random choices for the prime q, you should be able to find a suitable prime p.
In a version of the Digital Signature Algorithm, Alice needs a 256-bit prime q and a 2048-bit prime p such that q | p − 1. Suppose Alice chooses a random 256-bit prime q and a random 1792-bit even number k such that p = kq + 1 has 2048 bits. Show that the probability that p is prime is approximately 1/710. This means that Alice can find a suitable p and q fairly quickly.
Consider the following variation of the ElGamal signature scheme. Alice chooses a large prime and a primitive root . She also chooses a function that, given an integer with , returns an integer with . (For example, for is one such function.) She chooses a secret integer and computes . The numbers and the function are made public.
Alice wants to sign a message :
Alice chooses a random integer with .
She computes .
She computes .
The signed message is .
Bob verifies the signature as follows:
He computes .
He computes .
If , he declares the signature to be valid.
Show that if all procedures are followed correctly, then the verification equation is true.
Suppose Alice is lazy and chooses the constant function satisfying for all . Show that Eve can forge a valid signature on every message (for example, give a value of and of and that will give a valid signature for the message ).
Alice wants to sign a long message that she has broken into blocks . She knows that signing each block individually is wasteful, so she computes and signs . Her signed message is . Suppose Eve has a message that Alice signed. How can Eve put Alice’s signature on fraudulent messages?
In some scenarios, it is necessary to have a digital document signed by multiple participants. For example, a contract issued by a company might need to be signed by both the Issuer and the Supervisor before it is valid. To accomplish this, a trusted entity chooses to be the product of two large, distinct primes and chooses integers with . The pair is given to the Issuer, the pair is given to the Supervisor, and the pair is made public.
Under the assumption that RSA is hard to decrypt, why should it be difficult for someone who knows at most one of to produce such that ?
Devise a procedure where the Issuer signs the contract first and gives the signed contract to the Supervisor, who then signs it in such a way that anyone can verify that the document was signed by both the Issuer and the Supervisor. Use part (a) to show why the verification convinces someone that both parties signed the contract.
Suppose we use the ElGamal signature scheme with , , . We send two signed messages :
Show that the same value of was used for each signature.
Use this fact to find this value of and to find the value of such that .
Alice and Bob have the following RSA parameters:
Bob knows that
(where ). Alice signs a document and sends the document and signature (where ) to Bob. To keep the contents of the document secret, she encrypts using Bob’s public key. Bob receives the encrypted signature pair , where
Find the message and verify that it came from Alice. (The numbers are stored as sigpairm1, sigpairs1, signa, signb, sigpb, sigqb in the downloadable computer files (bit.ly/2JbcS6p).)
In problem 2, suppose that Bob had primes and . Assuming the same encryption exponents, explain why Bob is unable to verify Alice’s signature when she sends him the pair with
What modifications need to be made for the procedure to work? (The numbers and are stored as sigpairm2, sigpairs2 in the downloadable computer files (bit.ly/2JbcS6p).)
The mathematics behind cryptosystems is only part of the picture. Implementation is also very important. The best systems, when used incorrectly, can lead to serious problems. And certain design considerations that might look good at the time can later turn out to be bad.
We start with a whimsical example that (we hope) has never been used in practice. Alice wants to send a message to Bob over public channels. They decide to use the three-pass protocol (see Section 3.6).
Here is a physical description. Alice puts her message in a box, puts on her lock, and sends the box to Bob. Bob puts on his lock and sends the box back to Alice, who takes her lock off the box and sends the box to Bob. He takes off his lock and opens the box to retrieve the message. Notice that the box is always locked when it is in transit between Alice and Bob.
In Section 3.6, we gave an implementation using modular exponentiation. But Alice and Bob know that the one-time pad is the most secure cryptosystem, and they want to use only the best. Therefore, Alice and Bob each choose their own one-time pads, call them K_A and K_B. Alice’s message is M. She encrypts it as M ⊕ K_A and sends this to Bob, who computes (M ⊕ K_A) ⊕ K_B and sends M ⊕ K_A ⊕ K_B back to Alice. Then Alice removes her “lock” by computing (M ⊕ K_A ⊕ K_B) ⊕ K_A = M ⊕ K_B and sends this to Bob. He removes his lock by computing (M ⊕ K_B) ⊕ K_B = M.

Meanwhile, Eve intercepts the three transmissions and computes

(M ⊕ K_A) ⊕ (M ⊕ K_A ⊕ K_B) ⊕ (M ⊕ K_B) = M.
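The attack is easy to carry out in practice. Here is a short Python sketch (the message and the random pads are toy choices for illustration) that runs the three passes and then performs Eve’s XOR of the intercepted transmissions:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
ka = secrets.token_bytes(len(message))  # Alice's one-time pad
kb = secrets.token_bytes(len(message))  # Bob's one-time pad

pass1 = xor(message, ka)          # Alice -> Bob: M xor KA
pass2 = xor(pass1, kb)            # Bob -> Alice: M xor KA xor KB
pass3 = xor(pass2, ka)            # Alice -> Bob: M xor KB
assert xor(pass3, kb) == message  # Bob recovers M, as intended

# Eve saw all three transmissions; XORing them cancels both pads.
eavesdropped = xor(xor(pass1, pass2), pass3)
assert eavesdropped == message    # Eve has M without either pad
```

Each pad appears an even number of times in the triple XOR, so both pads cancel and only the message survives.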
The moral of the story: even the best cryptosystem can be insecure if not used appropriately.
In this chapter, we describe some situations where mistakes were made, leading to security flaws.
For its time, the Enigma machine (see Section 2.7) was an excellent system, and the German troops’ belief in this led to security breaches. After all, if your post is being overrun or your ship is sinking, are you going to try to escape, or are you going to risk your life to destroy a crypto machine that you’ve been told is so good that no one can break the system?
But there were also other lapses in its use. One of the most famous took advantage of an inherent “feature” of Enigma. The design, in particular the reflector (see Section 2.7), meant that a letter could not be encrypted as itself. So an A would never encrypt as an A, and a B would never encrypt as a B, etc. This consequence of the internal wiring probably looked great to people unfamiliar with cryptography. After all, there would always be “complete encryption.” The plaintext would always be completely changed. But in practice it meant that certain guesses for the plaintext could immediately be discarded as impossible.
One day in 1941, British cryptographers intercepted an Enigma message sent by the Italian navy, and Mavis Batey, who worked at Bletchley Park, noticed that the ciphertext did not contain the letter L. Knowing the feature of Enigma, she guessed that the original message was simply many repetitions of L (maybe a bored radio operator was tapping his finger on the keyboard while waiting for the next message). Using this guess, it was possible to determine the day’s Enigma key. One message was “Today’s the day minus three.” This alerted the British to a surprise attack on their Mediterranean fleet by the Italian navy. The resulting Battle of Cape Matapan established the dominance of the British fleet in the eastern Mediterranean.
The security of the RSA cryptosystem (see Chapter 9) relies heavily on the inability of an attacker to factor the modulus n = pq. Therefore, it is very important that the primes p and q be chosen in an unpredictable way. This usually requires a good pseudorandom number generator to give a starting point in a search for each prime, and a pseudorandom number generator needs a random seed, or some random data, as an input to start its computation.
In 1995, Ian Goldberg and David Wagner, two computer science graduate students at the University of California at Berkeley, were reading the documentation for the Netscape web browser, one of the early Internet browsers. When a secure transaction was needed, the browser generated an RSA key. They discovered that a time stamp was used to form the seed for the pseudorandom number generator that chose the RSA primes. Using this information, they were able to deduce the primes used for a transaction in just a few seconds on a desktop computer.
Needless to say, Netscape quickly put out a new version that repaired this flaw. As Jeff Bidzos, president of RSA Data Security pointed out, Netscape “declined to have us help them on the implementation of our software in their first version. But this time around, they’ve asked for our help” (NY Times, Sept. 19, 1995).
Another potential implementation flaw was averted several years ago. A mathematician at a large electronics company (which we’ll leave nameless) was talking with a colleague, who mentioned that they were about to put out an implementation of RSA where they saved time by choosing only one random starting point to look for primes, and then having p and q be the next two primes larger than the start. In horror, the mathematician pointed out that finding the next prime after √n, a very straightforward process, immediately yields the larger prime q, thus breaking the system.
Perhaps the most worrisome problem was discovered in 2012. Two independent teams collected RSA moduli from the web, for example from X.509 certificates, and computed the gcd of each pair (see [Lenstra et al.] and [Heninger et al.]). They found many cases where the gcd gave a nontrivial factor. For example, the team led by Arjen Lenstra collected several million RSA moduli. They computed the gcd of each pair of moduli and found 26965 moduli where the gcd gave a nontrivial factor. Fortunately, most of these moduli were no longer in use at the time, but this is still a serious security problem. Unless the team told you, there is no way to know whether your modulus is one of the bad ones unless you compute the gcd of your modulus with the other moduli (this is, of course, much faster than computing the gcd of all pairs, since it uses only the pairs that include your modulus). And if you find that your modulus is bad, you have factored someone else’s modulus and thus have broken their system.
How could this have happened? Probably some bad pseudorandom number generators were used. Let’s suppose that your pseudorandom number generator can produce only 1 million different primes (this could easily be the case if you don’t have a good source of random seeds; [Heninger et al.] gives a detailed analysis of this situation). You use it to generate 2000 primes, which you pair into 1000 RSA moduli. So what could go wrong?
Recall the Birthday Paradox (see Section 12.1). The number of “birthdays” is 1 million and the number of “people” is 2000. It is likely that two “people” have the same “birthday.” That is, two of the primes are equal (it is very unlikely that they are both used for the same modulus, especially if the generating program is competently written). Therefore, two moduli will share a common prime, and this can be discovered by computing their gcd.
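A small simulation makes the danger concrete. The sketch below uses a hypothetical pool of ten tiny primes to stand in for a weak generator (real RSA primes have hundreds of digits), builds eight moduli, and lets Eve search for shared factors by pairwise gcd:

```python
from math import gcd
import random

# Hypothetical tiny prime pool standing in for a weak pseudorandom
# generator that can produce only a few distinct primes.
prime_pool = [101, 103, 107, 109, 113, 127, 131, 137, 139, 149]

rng = random.Random(2012)
moduli = [p * q for p, q in (rng.sample(prime_pool, 2) for _ in range(8))]

# Eve computes the gcd of every pair of moduli; a nontrivial gcd
# reveals a shared prime and factors both moduli at once.
shared = [(n1, n2, gcd(n1, n2))
          for i, n1 in enumerate(moduli)
          for n2 in moduli[i + 1:]
          if gcd(n1, n2) > 1]

# With 16 prime choices drawn from a pool of only 10 primes, the
# pigeonhole principle guarantees at least one shared prime.
print(len(shared) >= 1)   # True
```

Here the pool is so small that a collision is certain; with a pool of a million primes and thousands of moduli, the Birthday Paradox makes collisions merely very likely, which is exactly what the 2012 teams observed.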
Of course, there are enough large primes that a good pseudorandom number generator will potentially produce so many primes that it is unlikely that a prime will be repeated. As is often the case in cryptography, we see that security ultimately relies on the quality of the pseudorandom number generator. Generating randomness is hard.
As wireless technology started to replace direct connections in the 1990s, the WEP (Wired Equivalent Privacy) algorithm was introduced; it was the standard method from 1997 to 2004 for users to access the Internet wirelessly through routers. The intent was to make wireless communication as secure as that of a wired connection. However, as we’ll see, there were several design flaws, and starting in 2004 it was replaced by the more secure WPA (Wi-Fi Protected Access), WPA2, and WPA3.
The basic arrangement has a central access point, usually a router, and several laptop computers that want to access the Internet through the router. The idea is to make the communications between the laptops and the router secure. Each laptop has the same WEP key as the other users for this router.
A laptop initiates communication with the router. Here is the protocol.
(a) The laptop sends an initial greeting to the router.

(b) Upon receiving the greeting, the router generates a random 24-bit IV (= initial value) and sends it to the laptop.

(c) The laptop produces a key for RC4 (a stream cipher described in Section 5.3) by concatenating the WEP key for this router with the IV received from the router:

RC4Key = WEPKey ‖ IV,

and uses it to produce the bitstream RC4(RC4Key).

(d) The laptop also computes the checksum CRC(M) of the message M. (See the description of CRC-32 at the end of this section.)

(e) The laptop forms the ciphertext

C = (M ‖ CRC(M)) ⊕ RC4(RC4Key),

which it sends to the router along with the IV (so the router does not need to remember what IV it sent).

(f) The router uses the WEPKey and the IV to form

RC4Key = WEPKey ‖ IV

and then uses this to produce the bitstream RC4(RC4Key).

(g) The router computes C ⊕ RC4(RC4Key) = M ‖ CRC(M), obtaining a message M′ and a checksum c′.

(h) If c′ = CRC(M′), the message M′ is regarded as authentic and is sent to the Internet. If not, it is rejected.
Notice that, except for the authentication step, this is essentially a one-time pad, with RC4 supplying the pseudorandom bitstream.
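The encryption and verification steps can be sketched in Python. The RC4 key schedule below is the standard one; the 40-bit key, the IV, and the use of zlib.crc32 as the checksum are toy stand-ins (real WEP prepends the IV to the key rather than appending it, and the frame format differs):

```python
import zlib

def rc4_keystream(key: bytes, length: int) -> bytes:
    """Generate `length` bytes of RC4 keystream (KSA followed by PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # pseudorandom generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

wep_key = b"\x01\x02\x03\x04\x05"   # 40-bit shared WEP key (toy value)
iv = b"\xaa\xbb\xcc"                # 24-bit IV chosen for this frame

message = b"hello router"
checksum = zlib.crc32(message).to_bytes(4, "big")   # checksum stand-in
plaintext = message + checksum                      # M || CRC(M)

rc4_key = wep_key + iv              # concatenate key and IV, step (c)
ciphertext = xor(plaintext, rc4_keystream(rc4_key, len(plaintext)))

# Router side: rebuild the keystream from the shared key and the IV
# (which travels in the clear), then strip and verify the checksum.
recovered = xor(ciphertext, rc4_keystream(rc4_key, len(ciphertext)))
assert recovered[:-4] == message
assert recovered[-4:] == zlib.crc32(recovered[:-4]).to_bytes(4, "big")
```

Note that anyone who learns the keystream for a given IV can forge frames for that IV without ever learning the WEP key, which is the weakness exploited below.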
Usually, the WEP key length was 40 bits. Why so short? This meant there were only 2^40 possible keys, so a brute force attack could find the key. But back when the algorithm was developed, U.S. export regulations prohibited the export of more secure cryptography, so the purpose of 40 bits was to allow international use of WEP. Later, many applications changed to 104 bits. However, it should be emphasized that WEP is considered insecure for any key size because of the full collection of design flaws in the protocol, which is why the Wi-Fi Alliance subsequently developed new security protocols to protect wireless communications.
The designers of WEP knew that reusing a one-time pad is insecure, which is why they included the 24-bit IV. Since the IV is concatenated with the 40-bit WEP key to form the 64-bit RC4Key (or 128 bits in more recent versions), there should be around 2^24 keys used to generate RC4 bitstreams for a given WEP key. This might seem secure, but a highly used router might be expected to use almost every possible IV in a short time span.
But the situation is even worse! The Birthday Paradox (see Section 12.1) enters again. There are 2^24 ≈ 17 million possible IV values, and √(2^24) = 2^12 = 4096. Therefore, after, for example, 10000 communications, it is very likely that some IV will have been used twice. Since the IV is sent unencrypted from the router to the laptop, this is easy for Eve to notice. There are now two messages encrypted with the same pseudorandom one-time pad. This allows Eve to recover these two messages (see Section 4.3). But Eve also obtains the RC4 bitstream used for encryption. The IV and this bitstream are all that Eve needs in Step (e) of the protocol to do the encryption. The WEP key is not required once the bitstream corresponding to the IV is known. Therefore, Eve has gained access to the router and can send messages as desired.
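The birthday estimate is easy to check numerically. The following sketch uses the standard approximation 1 − exp(−n²/(2N)) for the probability that n IVs drawn uniformly from N = 2^24 possibilities contain a repeat:

```python
from math import exp

# Probability that at least two of n random 24-bit IVs coincide,
# using the birthday approximation 1 - exp(-n^2 / (2N)).
N = 2**24  # number of possible IVs

for n in (1000, 5000, 10000):
    p_collision = 1 - exp(-n * n / (2 * N))
    print(f"{n:>6} frames: collision probability ~ {p_collision:.2f}")
```

At 10000 frames the collision probability is already about 0.95, which supports the claim in the text that a busy router quickly repeats an IV.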
There is even more damage that Eve can do. She intercepts a ciphertext C = (M ‖ CRC(M)) ⊕ RC4(RC4Key), and we are not assuming this RC4Key was obtained from a repeated IV value. So the RC4 bitstream is rather securely protecting the message. But suppose Eve guesses correctly that the message M instructs the router to forward a credit card number to Alice’s IP address.

Eve’s IP address is 172.16.254.1. Let M_1 be the message consisting of all 0s except for the XOR of the two IP addresses in the appropriate locations. Then M ⊕ M_1 is the original message with Alice’s IP address replaced by Eve’s. Moreover, because of the nature of CRC-32,

CRC(M ⊕ M_1) = CRC(M) ⊕ CRC(M_1).

Eve doesn’t know M, and therefore doesn’t know CRC(M). But she does know M_1, and therefore CRC(M_1). Therefore, she takes the original C and forms

C_1 = C ⊕ (M_1 ‖ CRC(M_1)).

Since (M ‖ CRC(M)) ⊕ (M_1 ‖ CRC(M_1)) = (M ⊕ M_1) ‖ (CRC(M) ⊕ CRC(M_1)) and CRC(M) ⊕ CRC(M_1) = CRC(M ⊕ M_1),

she has formed

C_1 = ((M ⊕ M_1) ‖ CRC(M ⊕ M_1)) ⊕ RC4(RC4Key).
This will be accepted by the router as an authentic message. The router forwards it to the Internet and it is delivered to Eve’s IP address. Eve reads the message and obtains the credit card number.
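The forgery rests entirely on the linearity of the checksum. The sketch below implements the raw CRC remainder as polynomial division mod 2 (the message layout, field offsets, and IP addresses are made-up toy values) and checks that XORing a known difference into the message changes the checksum in a fully predictable way:

```python
CRC32_POLY = 0x104C11DB7  # x^32 + x^26 + ... + x + 1, as a 33-bit integer

def poly_remainder(m: int, p: int = CRC32_POLY) -> int:
    """Remainder of the message polynomial m divided by p, mod 2."""
    while m.bit_length() >= p.bit_length():
        m ^= p << (m.bit_length() - p.bit_length())
    return m

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def crc(msg: bytes) -> int:
    return poly_remainder(int.from_bytes(msg, "big"))

msg = b"pay to IP 10.0.0.7 card 4000123412341234"
old_ip, new_ip = b"10.0.0.7", b"66.6.6.6"     # same length, toy addresses

# M1: all zeros except the XOR of the two IPs at the IP field (bytes 10..17)
delta = bytearray(len(msg))
delta[10:18] = xor(old_ip, new_ip)
delta = bytes(delta)

# Linearity: crc(M xor M1) == crc(M) xor crc(M1), so Eve can fix the
# checksum knowing only crc(M1), never M or crc(M).
forged = xor(msg, delta)
assert forged[10:18] == new_ip
assert crc(forged) == crc(msg) ^ crc(delta)
```

Equal-length XOR of bitstrings is exactly addition of the corresponding polynomials mod 2, and taking the remainder modulo a fixed polynomial respects that addition, which is why the final assertion holds.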
This last flaw easily could have been avoided if a cryptographic hash function had been used in place of CRC-32. Such functions are highly nonlinear (so it is very unlikely that h(M ⊕ M_1) = h(M) ⊕ h(M_1)), and it would be very hard to figure out how to modify the checksum so that the message is deemed authentic.
The original version of WEP used what is known as Shared Key Authentication to control access. In this version, after the laptop initiates the communication with a greeting, the router sends a random challenge bitstring to the laptop. The laptop encrypts this by using the WEP key as the key for RC4 and then XORing the challenge with the RC4 bitstring:

RC4(WEPKey) ⊕ Challenge.
This is sent to the router, which can do the same computation and compare results. If they agree, access is granted. If not, the laptop is rejected.
But Eve sees the Challenge and also RC4(WEPKey) ⊕ Challenge. A quick XOR of these two strings yields RC4(WEPKey). Now Eve can respond to any challenge, even if she does not know the WEP key, and thereby gain access to the router.
This access method was soon dropped and replaced with the open access system described above, where anyone can try to send a message to the system, but only the ones that decrypt and yield a valid checksum are accepted.
CRC-32 (32-bit Cyclic Redundancy Check) was developed as a way to detect bit errors in data. The message is written in binary and these bits are regarded as the coefficients of a polynomial mod 2. For example, the message 10010101 becomes the polynomial x^7 + x^4 + x^2 + 1. Divide this polynomial by the fixed polynomial

P(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1.

Let r(x) be the remainder, which is a polynomial of degree at most 31. Reduce the coefficients of r(x) mod 2 to obtain a binary string of length 32 (if the degree is less than 31, the binary string will start with some 0s corresponding to the 0-coefficients of the high-degree terms). For an example of such polynomial division, see Section 3.11. This binary string is the checksum CRC(M).

Adding two polynomials mod 2 corresponds to computing the XOR of the corresponding bitstrings formed from their coefficients. For example, the polynomial x^7 + x^4 + x^2 + 1 corresponds to 10010101 and x^4 + x + 1 corresponds to 00010011, with a few 0s added to match the length of the first string. The sum of the two polynomials mod 2 is x^7 + x^2 + x, which corresponds to 10000110 = 10010101 ⊕ 00010011. Since dividing the sum of two polynomials by P(x) yields a remainder that is the sum of the remainders that would be obtained by dividing each polynomial individually, it can be deduced that

CRC(M_1 ⊕ M_2) = CRC(M_1) ⊕ CRC(M_2)

for any messages M_1, M_2 of the same length.
The Prime Supply Company produces RSA keys. It has a database of primes with 200 decimal digits and a database of primes with 250 decimal digits. Whenever it is asked for an RSA modulus, it randomly chooses a 200-digit prime p and a 250-digit prime q from its databases, and then computes n = pq. It advertises the large number of possible moduli it can distribute and brags about its great level of security. After the Prime Supply Company has used this method to supply RSA moduli to 20000 customers, Eve tries computing gcd’s of pairs of moduli of these clients (that is, for each pair of clients, with RSA moduli n_i and n_j, she computes gcd(n_i, n_j)). What is the likely outcome of Eve’s computations? Explain.
The Modulus Supply Company sells RSA moduli. To save money, it has one 300-digit prime p and, for each customer, it randomly chooses another 300-digit prime q (different from p and different from the primes q supplied to other customers). Then it sells n = pq, along with encryption and decryption exponents, to unsuspecting customers.

(a) Suppose Eve suspects that the company is using this method of providing moduli to customers. How can she read their messages? (As usual, the modulus and the encryption exponent for each customer are public information.)

(b) Now suppose that the customers complain that Eve is reading their messages. The company computes a set S of primes, each with 300 digits. For each customer, it chooses a random prime p from S, then randomly chooses a 300-digit prime q, as in part (a). The 100 customers who receive moduli from this update are happy, and Eve publicly complains that she no longer can break their systems. As a result, 2000 more customers buy moduli from the company. Explain why the 100 customers probably have distinct primes p, but among the 2000 customers there are probably two with the same p.
Suppose a polynomial p(x) of degree 4 is used in place of the P(x) used in CRC-32. If M is a binary string, then express it as a polynomial M(x), divide M(x) by p(x), and let r(x) be the remainder mod 2. Change this back to a binary string of length 4 and call this string c(M). This produces a checksum in a manner similar to CRC-32.

(a) Compute c(1001001) and c(1101100).

(b) Compute c(1001001 ⊕ 1101100) and show that it is the XOR of the two individual checksums obtained in part (a).
You intercept the ciphertext TUCDZARQKERUIZCU, which was encrypted using an Enigma machine. You know the plaintext was either ATTACKONTHURSDAY or ATTACKONSATURDAY. Which is it?
You play the CI Game from Chapter 4 with Bob. You give him the plaintexts CAT and DOG. He chooses one of these at random and encrypts it on his Enigma machine, producing the ciphertext DDH. Can you decide which plaintext he encrypted?
Suppose you play the CI Game in part (a) using plaintexts of 100 letters (let’s assume that these plaintexts agree in only 10 letters and differ in the other 90 letters). Explain why you should win almost every time. What does this say about ciphertext indistinguishability for Enigma?
Up to this point, we have covered many basic cryptographic tools, ranging from encryption algorithms to hash algorithms to digital signatures. A natural question arises: Can we just apply these tools directly to make computers and communications secure?
At first glance, one might think that public key methods are the panacea for all of security. They allow two parties who have never met to exchange messages securely. They also provide an easy way to authenticate the origin of a message and, when combined with hash functions, these signature operations can be made efficient.
Unfortunately, the answer is definitely no and there are many problems that still remain. In discussing public key algorithms, we never really discussed how the public keys are distributed. We have casually said that Alice will announce her public key for Bob to use. Bob, however, should not be too naive in just believing what he hears. How does he know that it is actually Alice that he is communicating with? Perhaps Alice’s evil friend, Mallory, is pretending to be Alice but is actually announcing Mallory’s public key instead. Similarly, when you access a website to make a purchase, how do you know that your transaction is really with a legitimate merchant and that no one has set up a false organization? The real challenge in these problems is the issue of authentication, and Bob should always confirm that he is communicating with Alice before sending any important information.
Combining different cryptographic tools to provide security is much trickier than grabbing algorithms off of the shelf. Instead, security protocols involving the exchange of messages between different entities must be carefully thought out in order to prevent clever attacks. This chapter focuses on such security protocols.
If you receive an email asking you to go to a website and update your account information, how can you be sure that the website is legitimate? An impostor can easily set up a web page that looks like the correct one but simply records sensitive information and forwards it to Eve. This is an important authentication problem that must be addressed in real-world implementations of cryptographic protocols. One standard solution uses certificates and a trusted authority and will be discussed in Section 15.5. Authentication will also play an important role in the protocols in many other sections of this chapter.
Another major consideration that must be addressed in communications over public channels is the intruder-in-the-middle attack, which we’ll discuss shortly. It motivates several of the steps in the protocols we discuss.
Eve, who has recently learned the difference between a knight and a rook, claims that she can play two chess grandmasters simultaneously and either win one game or draw both games. The strategy is simple. She waits for the first grandmaster to move, then makes the identical move against the second grandmaster. When the second grandmaster responds, Eve makes that play against the first grandmaster. Continuing in this way, Eve cannot lose both games (unless she runs into time trouble because of the slight delay in transferring the moves).
A similar strategy, called the intruder-in-the-middle attack, can be used against many cryptographic protocols. Many of the technicalities of the algorithms in this chapter are caused by efforts to thwart such an attack.
Let’s see how this attack works against the Diffie-Hellman key exchange from Section 10.4.
Let’s recall the protocol. Alice and Bob want to establish a key for communicating. The Diffie-Hellman scheme for accomplishing this is as follows:
1. Either Alice or Bob selects a large, secure prime number p and a primitive root α (mod p). Both p and α can be made public.

2. Alice chooses a secret random x with 1 ≤ x ≤ p − 2, and Bob selects a secret random y with 1 ≤ y ≤ p − 2.

3. Alice sends α^x (mod p) to Bob, and Bob sends α^y (mod p) to Alice.

4. Using the messages that they each have received, they can each calculate the session key K. Alice calculates K by K ≡ (α^y)^x (mod p), and Bob calculates K by K ≡ (α^x)^y (mod p).
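The exchange can be written in a few lines of Python. The prime and the claimed primitive root below are toy values chosen for illustration (a real p has hundreds of digits, and 5 is only assumed to be a primitive root here); the key agreement works regardless:

```python
import secrets

# Small published parameters for illustration only.
p = 2087          # a prime; real moduli have hundreds of digits
alpha = 5         # assumed primitive root mod p (toy value)

x = secrets.randbelow(p - 2) + 1   # Alice's secret exponent, 1..p-2
y = secrets.randbelow(p - 2) + 1   # Bob's secret exponent, 1..p-2

msg_to_bob = pow(alpha, x, p)      # Alice sends alpha^x mod p
msg_to_alice = pow(alpha, y, p)    # Bob sends alpha^y mod p

key_alice = pow(msg_to_alice, x, p)   # (alpha^y)^x mod p
key_bob = pow(msg_to_bob, y, p)       # (alpha^x)^y mod p
assert key_alice == key_bob           # both hold alpha^(xy) mod p
```

Note that the parties exchange only α^x and α^y; an eavesdropper who cannot solve the discrete logarithm problem learns neither x, y, nor the shared key.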
Here is how the intruder-in-the-middle attack works.
1. Eve chooses an exponent z.

2. Eve intercepts α^x and α^y.

3. Eve sends α^z to Alice and α^z to Bob (Alice believes she is receiving α^y and Bob believes he is receiving α^x).

4. Eve computes K_AE ≡ (α^x)^z (mod p) and K_EB ≡ (α^y)^z (mod p). Alice, not realizing that Eve is in the middle, also computes K_AE, and Bob computes K_EB.

5. When Alice sends a message to Bob, encrypted with K_AE, Eve intercepts it, deciphers it, encrypts it with K_EB, and sends it to Bob. Bob decrypts with K_EB and obtains the message. Bob has no reason to believe the communication was insecure. Meanwhile, Eve is reading the juicy gossip that she has obtained.
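The attack itself is as easy to program as the honest protocol. In the sketch below (the same toy parameters as before: a small prime and an assumed primitive root), Eve substitutes α^z in both directions and ends up sharing one key with each party:

```python
import secrets

p, alpha = 2087, 5   # toy public parameters (5 assumed primitive mod p)

x = secrets.randbelow(p - 2) + 1   # Alice's secret
y = secrets.randbelow(p - 2) + 1   # Bob's secret
z = secrets.randbelow(p - 2) + 1   # Eve's secret

alice_sends = pow(alpha, x, p)     # intercepted by Eve
bob_sends = pow(alpha, y, p)       # intercepted by Eve
eve_sends = pow(alpha, z, p)       # Eve forwards alpha^z to both parties

# Alice and Bob each unknowingly compute a key shared with Eve.
key_alice = pow(eve_sends, x, p)          # Alice's "session key" = K_AE
key_bob = pow(eve_sends, y, p)            # Bob's "session key"   = K_EB
key_eve_alice = pow(alice_sends, z, p)    # Eve's key with Alice  = K_AE
key_eve_bob = pow(bob_sends, z, p)        # Eve's key with Bob    = K_EB

assert key_alice == key_eve_alice
assert key_bob == key_eve_bob
# Eve can now decrypt Alice's traffic and re-encrypt it for Bob.
```

Nothing in the unauthenticated exchange lets Alice or Bob detect that the value they received came from Eve rather than from each other, which is exactly why the protocols below add authentication.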
To avoid the intruder-in-the-middle attack, it is desirable to have a procedure that authenticates Alice’s and Bob’s identities to each other while the key is being formed. A protocol that can do this is known as an authenticated key agreement protocol.
A standard way to stop the intruder-in-the-middle attack is the station-to-station (STS) protocol, which uses digital signatures. Each user U has a digital signature function sig_U with verification algorithm ver_U. For example, sig_U could produce an RSA or ElGamal signature, and ver_U checks that it is a valid signature for U. The verification algorithms are compiled and made public by the trusted authority Trent, who certifies that ver_U is actually the verification algorithm for U and not for Eve.
Suppose now that Alice and Bob want to establish a key K to use in an encryption function E_K. They proceed as in the Diffie-Hellman key exchange, but with the added feature of digital signatures:

1. They choose a large prime p and a primitive root α.

2. Alice chooses a random x and Bob chooses a random y.

3. Alice computes α^x (mod p), and Bob computes α^y (mod p).

4. Alice sends α^x to Bob.

5. Bob computes K ≡ (α^x)^y (mod p).

6. Bob sends α^y and E_K(sig_B(α^y, α^x)) to Alice.

7. Alice computes K ≡ (α^y)^x (mod p).

8. Alice decrypts E_K(sig_B(α^y, α^x)) to obtain sig_B(α^y, α^x).

9. Alice asks Trent to verify that ver_B is Bob’s verification algorithm.

10. Alice uses ver_B to verify Bob’s signature.

11. Alice sends E_K(sig_A(α^x, α^y)) to Bob.

12. Bob decrypts, asks Trent to verify that ver_A is Alice’s verification algorithm, and then uses ver_A to verify Alice’s signature.
This protocol is due to Diffie, van Oorschot, and Wiener. Note that Alice and Bob are also certain that they are using the same key K, since it is very unlikely that an incorrect key would give a decryption that is a valid signature.
Note the role that trust plays in the protocol. Alice and Bob must trust Trent’s verification if they are to have confidence that their communications are secure. Throughout this chapter, a trusted authority such as Trent will be an important participant in many protocols.
So far in this book we have discussed various cryptographic concepts and focused on developing algorithms for secure communication. But a cryptographic algorithm is only as strong as the security of its keys. If Alice were to announce to the whole world her key before starting an AES session with Bob, then anyone could eavesdrop. Such a scenario is absurd, of course. But it represents an extreme version of a very important issue: If Alice and Bob are unable to meet in order to exchange their keys, can they still decide on a key without compromising future communication?
In particular, there is the fundamental problem of sharing secret information for the establishment of keys for symmetric cryptography. By symmetric cryptography, we mean a system such as AES where both the sender and the recipient use the same key. This is in contrast to public key methods such as RSA, where the sender has one key (the encryption exponent) and the receiver has another (the decryption exponent).
In key establishment protocols, there is a sequence of steps that take place between Alice and Bob so that they can share some secret information needed in the establishment of a key. Since public key cryptography methods employ public encryption keys that are stored in public databases, one might think that public key cryptography provides an easy solution to this problem. This is partially true. The main downside to public key cryptography is that even the best public key cryptosystems are computationally slow when compared with the best symmetric key methods. RSA, for example, requires exponentiation, which is not as fast as the mixing of bits that takes place in AES. Therefore, sometimes RSA is used to transmit an AES key that will then be used for transmitting vast amounts of data. However, a central server that needs to communicate with many clients in short time intervals sometimes needs key establishment methods that are faster than current versions of public key algorithms. Therefore, in this and in various other situations, we need to consider other means for the exchange and establishment of keys for symmetric encryption algorithms.
There are two basic types of key establishment. In key agreement protocols, neither party knows the key in advance; it is determined as a result of their interaction. In key distribution protocols, one party has decided on a key and transmits it to the other party.
Diffie-Hellman key exchange (see Sections 10.4 and 15.1) is an example of key agreement. Using RSA to transmit an AES key is an example of key distribution.
In any key establishment protocol, authentication and intruder-in-the-middle attacks are security concerns. Pre-distribution, which will be discussed shortly, is one solution. Another solution involves employing a server that will handle the task of securely giving keys to two entities wishing to communicate. We will also look at some other basic protocols for key distribution using a third party. Solutions that are more practical for Internet communications are treated in later sections of this chapter.
In the simplest version of this protocol, if Alice wants to communicate with Bob, the keys or key schedules (lists describing which keys to use at which times) are decided upon in advance and somehow this information is sent securely from one to the other. For example, this method was used by the German navy in World War II. However, the British were able to use codebooks from captured ships to find daily keys and thus read messages.
There are some obvious limitations and drawbacks to pre-distribution. First, it requires two parties, Alice and Bob, to have met or to have established a secure channel between them in the first place. Second, once Alice and Bob have met and exchanged information, there is nothing they can do, other than meeting again, to change the key information in case it gets compromised. The keys are predetermined and there is no easy method to change the key after a certain amount of time. When using the same key for long periods of time, one runs a risk that the key will become compromised. The more data that are transmitted, the more data there are with which to build statistical attacks.
Here is a general and slightly modified situation. First, we require a trusted authority whom we call Trent. For every pair of users, call them (A, B), Trent produces a random key K_AB that will be used as a key for a symmetric encryption method (hence K_AB = K_BA). It is assumed that Trent is powerful and has established a secure channel to each of the users. He distributes all the keys that he has determined to his users. Thus, if Trent is responsible for n users, each user will be receiving n − 1 keys to store, and Trent must send n(n − 1)/2 keys securely. If n is large, this could be a problem. The storage that each user requires is also a problem.
One method for reducing the amount of information that must be sent from the trusted authority is the Blom key pre-distribution scheme. Start with a network of n users, and let p be a large prime, where p ≥ n. Everyone has knowledge of the prime p. The protocol is now the following:

1. Each user U in the network is assigned a distinct public number r_U (mod p).

2. Trent chooses three secret random numbers a, b, and c mod p.

3. For each user U, Trent calculates the numbers

a_U ≡ a + b·r_U (mod p),    b_U ≡ b + c·r_U (mod p)

and sends them via his secure channel to U.

4. Each user U forms the linear polynomial

g_U(x) = a_U + b_U x.

5. If Alice (A) wants to communicate with Bob (B), then Alice computes K_AB = g_A(r_B), while Bob computes K_BA = g_B(r_A).

6. It can be shown that K_AB = K_BA (Exercise 2). Alice and Bob communicate via a symmetric encryption system, for example, AES, using the key (or a key derived from) K_AB.
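The scheme is short enough to implement directly. In the Python sketch below, the prime, the public r_U values, and the user names are hypothetical toy choices; the symmetry K_AB = K_BA follows because each key equals a + b(r_A + r_B) + c·r_A·r_B (mod p):

```python
import secrets

p = 997                                          # public prime, p >= n (toy size)
users = {"Alice": 11, "Bob": 3, "Charlie": 2}    # distinct public r_U (hypothetical)

# Trent's three secret random numbers.
a, b, c = (secrets.randbelow(p) for _ in range(3))

def trent_values(r: int) -> tuple[int, int]:
    """The pair (a_U, b_U) Trent sends to user U over a secure channel."""
    return ((a + b * r) % p, (b + c * r) % p)

def key(own_r: int, other_r: int) -> int:
    """User U evaluates g_U(x) = a_U + b_U x at the other party's r value."""
    a_u, b_u = trent_values(own_r)
    return (a_u + b_u * other_r) % p

k_ab = key(users["Alice"], users["Bob"])
k_ba = key(users["Bob"], users["Alice"])
assert k_ab == k_ba   # both equal a + b(r_A + r_B) + c*r_A*r_B mod p
```

Expanding g_A(r_B) = (a + b·r_A) + (b + c·r_A)·r_B shows the expression is symmetric in r_A and r_B, which is exactly the content of Exercise 2.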
Consider a network consisting of three users Alice, Bob, and Charlie. Let , and let
Suppose Trent chooses the numbers . The corresponding linear polynomials are given by
It is now possible to calculate the keys that this scheme would generate:
It is easy to check that K_AB = K_BA, etc., in this example.
If the two users Eve (E) and Oscar (O) conspire, they can determine a, b, and c and therefore find all keys for all users. They proceed as follows. They know the numbers a_E, b_E, a_O, b_O. The defining equations for the last three of these numbers can be written in matrix form as

[ b_E ]   [ 0   1     r_E ] [ a ]
[ a_O ] ≡ [ 1   r_O   0   ] [ b ]   (mod p).
[ b_O ]   [ 0   1     r_O ] [ c ]

The determinant of the matrix is r_E − r_O (up to sign). Since the numbers r_U were chosen to be distinct mod p, the determinant is nonzero mod p and therefore the system has a unique solution a, b, c.

Without Eve’s help, Oscar has only a 2 × 3 system to work with and therefore cannot find a, b, c. In fact, suppose he wants to calculate the key K_AB being used by Alice and Bob. Since K_AB ≡ a + b(r_A + r_B) + c·r_A r_B (mod p) (see Exercise 2), there is the matrix equation

[ a_O  ]   [ 1   r_O         0       ] [ a ]
[ b_O  ] ≡ [ 0   1           r_O     ] [ b ]   (mod p).
[ K_AB ]   [ 1   r_A + r_B   r_A r_B ] [ c ]

The matrix has determinant (r_A − r_O)(r_B − r_O) ≢ 0 (mod p). Therefore, there is a solution a, b, c for every possible value of K_AB. This means that Oscar obtains no information about K_AB.
For each k ≥ 1, there are Blom schemes that are secure against coalitions of at most k users, but which succumb to conspiracies of k + 1 users. See [Blom].
Key pre-distribution schemes are often impractical because they require significant resources to initialize and do not allow for keys to be changed or replaced easily when keys are deemed no longer safe. One way around these problems is to introduce a trusted authority, whose task is to distribute new keys to communicating parties as they are needed. This trusted third party may be a server on a computer network, or an organization that is trusted by both Alice and Bob to distribute keys securely.
Authentication is critical to key distribution. Alice and Bob will ask the trusted third party, Trent, to give them keys. They want to make certain that there are no malicious entities masquerading as Trent and sending them false key messages. Additionally, when Alice and Bob exchange messages with each other, they will each need to make certain that the person they are talking to is precisely the person they think they are talking to.
One of the main challenges facing key distribution is the issue of replay attacks. In a replay attack, an opponent may record a message and repeat it at a later time in hope of either pretending to be another party or eliciting a particular response from an entity in order to compromise a key. To provide authentication and protect against replay attacks, we need to make certain that vital information, such as keys and identification parameters, are kept confidential. Additionally, we need to guarantee that each message is fresh; that is, it isn’t a repeat of a message from a long time ago.
The task of confidentiality can be easily accomplished using existing keys already shared between entities. These keys are used to encrypt messages used in the distribution of session keys and are therefore often called key encrypting keys. Unfortunately, no matter how we look at it, there is a chicken-and-egg problem: In order to distribute session keys securely, we must assume that entities have already securely shared key encrypting keys with the trusted authority.
To handle message freshness, however, we typically need to attach extra data fields in each message we exchange. There are three main types of data fields that are often introduced in order to prevent replay attacks:
Sequence numbers: Each message that is sent between two entities has a sequence number associated with it. If an entity ever sees the same sequence number again, then the entity concludes that the message is a replay. The challenge with sequence numbers is that they require each party to keep track of the sequence numbers it has witnessed.
Timestamps: Each message that is sent between two entities has a statement of the time period for when that message is valid. This requires that both entities have clocks that are set to the same time.
Nonces: A nonce is a random message that is allowed to be used only once and is used as part of a challenge-response mechanism. In a challenge-response, Alice sends Bob a message involving a nonce and requires Bob to send back a correct response to her nonce.
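The nonce-based challenge-response just described can be sketched with standard-library primitives. The particular mechanism here, in which Bob proves knowledge of a pre-shared key by computing an HMAC over Alice's fresh nonce, is one illustrative choice of response function, not the only possibility; all names are ours.

```python
import hashlib
import hmac
import secrets

# A key Alice and Bob are assumed to share already (illustrative).
shared_key = secrets.token_bytes(32)

def challenge():
    """Alice picks a fresh random nonce; it must never be reused."""
    return secrets.token_bytes(16)

def respond(key, nonce):
    """Bob answers the challenge by MACing the nonce with the shared key."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key, nonce, response):
    """Alice recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(respond(key, nonce), response)

nonce = challenge()
resp = respond(shared_key, nonce)
assert verify(shared_key, nonce, resp)            # fresh response accepted
assert not verify(shared_key, challenge(), resp)  # response to an old nonce fails
```

Because each challenge uses a new nonce, a recorded response is useless against any later challenge, which is exactly the freshness guarantee the text asks for.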
We will now look at two examples of key distribution schemes and analyze attacks that may be used against each in order to bypass the intended security. These two examples should highlight how difficult it is to distribute keys securely.
We begin with a protocol known as the wide-mouthed frog protocol, which is one of the simplest symmetric key management protocols involving a trusted authority. In this protocol, Alice chooses a session key to communicate with Bob and has Trent transfer it to Bob securely:
1. Alice → Trent: ID_A, e_{K_AT}(t_A, ID_B, K_AB).
2. Trent → Bob: e_{K_BT}(t_T, ID_A, K_AB).
Here, K_AT is a key shared between Alice and Trent, while K_BT is a key shared between Bob and Trent. Alice’s and Bob’s identifying information are given by ID_A and ID_B, respectively. The parameter t_A is a timestamp supplied by Alice, while t_T is a timestamp given by Trent. It is assumed that Alice, Trent, and Bob have synchronized clocks. Bob will accept K_AB as fresh if the timestamp t_T is within a window of his current time. The key K_AB will be valid for a certain period of time after t_T.
The purpose behind the two timestamps is to allow Bob to check to see that the message is fresh. In the first message, Alice sends a message with a timestamp t_A. If Trent gets the message and the timestamp t_A is not too far off from his current time, he will then agree to translate the message and deliver it to Bob.
The problem with the protocol comes from the second message. Here, Trent has updated the timestamp t_A to a newer timestamp t_T. Unfortunately, this simple change allows for a clever attack in which the nefarious Mallory may cause Trent to extend the lifetime of an old key. Let us step through this attack.
After seeing one exchange of the protocol, Mallory pretends to be Bob wanting to share a key with Alice. Mallory sends Trent the replay ID_B, e_{K_BT}(t_T, ID_A, K_AB).
Trent sends e_{K_AT}(t_T', ID_B, K_AB) to Alice, with a new timestamp t_T'. Alice thinks this is a valid message since it came from Trent and was encrypted using the key she shares with Trent. The key K_AB will now be valid for a period of time after t_T'.
Mallory then pretends to be Alice and gets Trent to send e_{K_BT}(t_T'', ID_A, K_AB) to Bob. The key K_AB will now be valid for a period of time after t_T''.
Mallory continues alternately playing Trent against Bob and then Trent against Alice.
The net result is that the malicious Mallory can use Trent as an agent to force Alice and Bob to continue to use K_AB indefinitely. Of course, Alice and Bob should keep track of the fact that they have seen K_AB before and begin to suspect that something suspicious is going on when they repeatedly see K_AB. The protocol did not explicitly state that this was necessary, however, and security protocols should be very explicit on what it is that they assume and don’t assume. The true problem, though, is the fact that Trent replaces t_A with t_T. If Trent had not changed the timestamp and instead had left t_A in place, then the protocol would have been better off.
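This replay attack can be simulated under two simplifying assumptions of ours: all parties share one global clock, and encryption is purely symbolic (a ciphertext is just a tuple tagged with the key name), so the sketch models the protocol logic rather than any real cipher. After ten bounces, the key chosen at time 0 still looks fresh.

```python
# Toy model of the replay attack on the wide-mouthed frog protocol.
clock = 0          # global clock shared by all parties (a modeling assumption)
LIFETIME = 5       # a message is accepted for LIFETIME ticks after its timestamp

def enc(key_name, payload):
    return ("enc", key_name, payload)

def dec(key_name, ct):
    tag, k, payload = ct
    assert tag == "enc" and k == key_name, "wrong key"
    return payload

def trent_translate(sender, ct):
    """Trent decrypts with the sender's key, checks freshness, and
    re-encrypts for the receiver with a *fresh* timestamp -- the flaw."""
    t, receiver, session_key = dec("K_" + sender + "T", ct)
    assert clock - t <= LIFETIME, "stale request"
    return enc("K_" + receiver + "T", (clock, sender, session_key))

# Alice's legitimate request at time 0: send K_AB to Bob via Trent.
ct = enc("K_AT", (clock, "B", "K_AB"))
sender = "A"

# Mallory alternately replays Trent's own output back at him, pretending
# to be Bob, then Alice, then Bob, ...  Each pass refreshes the timestamp.
for _ in range(10):
    ct = trent_translate(sender, ct)
    sender = "B" if sender == "A" else "A"  # next replay comes from the receiver
    clock += LIFETIME                       # let (almost) the full lifetime pass

# 50 ticks after Alice chose K_AB, the message still looks fresh:
t, origin, key = dec("K_" + sender + "T", ct)
assert key == "K_AB" and clock - t <= LIFETIME
```

If Trent copied the original timestamp instead of stamping a new one, the freshness check inside trent_translate would fail on Mallory's first replay after the lifetime expired, which is exactly the fix suggested above.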
Another example of an authenticated key exchange protocol is due to Needham and Schroeder. In the Needham–Schroeder protocol, Alice and Bob wish to obtain a session key K_AB from Trent so that they can talk with each other. The protocol involves the following steps:

1. Alice → Trent: ID_A, ID_B, r_1.
2. Trent → Alice: e_{K_AT}(r_1, ID_B, K_AB, e_{K_BT}(K_AB, ID_A)).
3. Alice → Bob: e_{K_BT}(K_AB, ID_A).
4. Bob → Alice: e_{K_AB}(r_2).
5. Alice → Bob: e_{K_AB}(r_2 − 1).
Just as in the earlier protocol, K_AT is a key shared between Alice and Trent, while K_BT is a key shared between Bob and Trent. Unlike the wide-mouthed frog protocol, the Needham–Schroeder protocol does not employ timestamps but instead uses random numbers r_1 and r_2 as nonces. In the first step, Alice sends Trent her request, which is a statement of who she is and whom she wants to talk to, along with a random number r_1. Trent gives Alice the session key K_AB and gives Alice a package e_{K_BT}(K_AB, ID_A) that she will deliver to Bob. In the next step, she delivers the package to Bob. Bob can decrypt this to get the session key and the identity of the person he is talking with. Next, Bob sends Alice his own challenge by sending the second nonce r_2 encrypted with the session key. In the final step, Alice proves her identity to Bob by answering his challenge. Using r_2 − 1 instead of r_2 prevents Mallory from replaying message 4.
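The five steps above can be traced end to end with symbolic encryption, where a ciphertext is a tuple tagged with the key name so that only a holder of the right key can "decrypt" it. This models who can read what, not real cryptography, and the key names are our labels.

```python
import secrets

# Symbolic walk-through of the Needham-Schroeder exchange.
def enc(key, payload):
    return ("enc", key, payload)

def dec(key, ct):
    tag, k, payload = ct
    assert tag == "enc" and k == key, "wrong key"
    return payload

K_AT, K_BT = "K_AT", "K_BT"     # long-term keys shared with Trent

# Step 1. Alice -> Trent: ID_A, ID_B, r_1
r1 = secrets.randbelow(2**64)

# Step 2. Trent -> Alice: e_KAT(r_1, ID_B, K_AB, e_KBT(K_AB, ID_A))
K_AB = "fresh-session-key"
ticket = enc(K_BT, (K_AB, "A"))
msg2 = enc(K_AT, (r1, "B", K_AB, ticket))

r1_echo, peer, session, ticket_for_bob = dec(K_AT, msg2)
assert r1_echo == r1 and peer == "B"   # the echoed nonce proves freshness

# Step 3. Alice -> Bob: the ticket (Bob learns K_AB and whom he talks to)
bob_session, bob_peer = dec(K_BT, ticket_for_bob)
assert bob_peer == "A"

# Step 4. Bob -> Alice: e_KAB(r_2), readable only by a holder of K_AB
r2 = secrets.randbelow(2**64)
challenge = enc(bob_session, r2)

# Step 5. Alice -> Bob: e_KAB(r_2 - 1), proving she holds K_AB
reply = enc(session, dec(session, challenge) - 1)
assert dec(bob_session, reply) == r2 - 1
```

Note that the assertion in step 2 is precisely the check that would fail if Eve substituted an old Step 2 transmission, since the stale message would carry a stale nonce.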
The purpose of r_1 is to prevent the reuse of old messages. Suppose r_1 is omitted from Steps 1 and 2. If Eve sees that Alice wants to communicate with Bob, she could intercept Trent’s message in Step 2 and substitute a transmission from a previous Step 2. Then Alice and Bob would communicate with a previously used session key, something that should generally be avoided. When r_1 is omitted, Alice has no way of knowing that this happened unless she has kept a list of previous session keys.
Observe that the key exchange portion of the protocol is completed at the end of the third step. The last two exchanges, however, seem a little out of place and deserve some more discussion. The purpose of the nonce r_2 in step 4 and step 5 is to prevent replay attacks in which Mallory sends an old e_{K_BT}(K_AB, ID_A) to Bob. If we didn’t have step 4 and step 5, Bob would automatically assume that K_AB is the correct key to use. Mallory could use this strategy to force Bob to send out more messages to Alice involving K_AB. Step 4 and step 5 allow Bob to issue a challenge to Alice where she can prove to Bob that she really knows the session key K_AB. Only Alice should be able to use K_AB to calculate e_{K_AB}(r_2 − 1).
In spite of the apparent security that the challenge-response in step 4 and step 5 provides, there is a potential security problem that can arise if Mallory ever figures out the session key K_AB. Let us step through this possible attack.
Here, Mallory replays an old message e_{K_BT}(K_AB, ID_A) from step 3 of Needham–Schroeder as if Mallory were Alice. When Bob gets this message, he issues a challenge to Alice in the form of a new nonce r_2. Mallory can intercept this challenge and, since she knows the session key K_AB, she can respond correctly to the challenge. The net result is that Mallory will have passed Bob’s authentication challenge as if she were Alice. From this point on, Bob will communicate using K_AB and believe he is communicating with Alice. Mallory can use Alice’s identity to complete her evil plans.
Building a solid key distribution protocol is very tough. There are many examples in the security literature of key distribution schemes that have failed because of a clever attack that was found years later. It might seem a lost cause since we have examined two protocols that both have weaknesses associated with them. However, in the rest of this chapter we shall look at protocols that have so far proven successful. We begin our discussion of successful protocols in the next section, where we will discuss Kerberos, which is an improved variation of the Needham–Schroeder key exchange protocol. Kerberos has withstood careful scrutiny by the community and has been adopted for use in many applications.
Kerberos (named for the three-headed dog that guarded the entrance to Hades) is a real-world implementation of a symmetric cryptography protocol whose purpose is to provide strong levels of authentication and security in key exchange between users in a network. Here we use the term users loosely, as a user might be an individual, or it might be a program requesting communication with another program. Kerberos grew out of a larger development project at MIT known as Project Athena. The purpose of Athena was to provide a huge network of computer workstations for the undergraduate student body at MIT, allowing students to access their files easily from anywhere on the network. As one might guess, such a development quickly raised questions about network security. In particular, communication across a public network such as Athena is very insecure. It is easily possible to observe data flowing across a network and look for interesting bits of information such as passwords and other types of information that one would wish to remain private. Kerberos was developed in order to address such security issues. In the following, we present the basic Kerberos model and describe what it is and what it attempts to do. For more thorough descriptions, see [Schneier].
Kerberos is based on a client/server architecture. A client is either a user or some software that has some task that it seeks to accomplish. For example, a client might wish to send email, print documents, or mount devices. Servers are larger entities whose function is to provide services to the clients. As an example, on the Internet and World Wide Web there is a concept of a domain name server (DNS), which provides names or addresses to clients such as email programs or Internet browsers.
The basic Kerberos model has the following participants:
Cliff: a client
Serge: a server
Trent: a trusted authority
Grant: a ticket-granting server
The trusted authority is also known as an authentication server. To begin, Cliff and Serge have no secret key information shared between them, and it is the purpose of Kerberos to give each of them information securely. A result of the Kerberos protocol is that Serge will have verified Cliff’s identity (he wouldn’t want to have a conversation with a fake Cliff, would he?), and a session key will be established.
The protocol, depicted in Figure 15.1, begins with Cliff requesting a ticket for ticket-granting service from Trent. Since Trent is the powerful trusted authority, he has a database of password information for all the clients (for this reason, Trent is also sometimes referred to as the Kerberos server). Trent returns a ticket that is encrypted with the client’s secret password information. Cliff would now like to use the service that Serge provides, but before he can do this, he must be allowed to talk to Serge. Cliff presents his ticket to Grant, the ticket-granting server. Grant takes this ticket, and if everything is OK (recall that the ticket has some information identifying Cliff), then Grant gives a new ticket to Cliff that will allow Cliff to make use of Serge’s service (and only Serge’s service; this ticket will not be valid with Sarah, a different server). Cliff now has a service ticket, which he can present to Serge. He sends Serge the service ticket as well as an authentication credential. Serge checks the ticket with the authentication credential to make sure it is valid. If this final exchange checks out, then Serge will provide the service to Cliff.
The Kerberos protocol is a formal version of protocols we use in everyday life, where different entities are involved in authorizing different steps in a process; for example, using an ATM to get cash, then buying a ticket for a ride at a fair.
We now look at Kerberos in more detail. Kerberos makes use of a symmetric encryption algorithm. In the original release of Kerberos Version V, Kerberos used DES operating in CBC mode; however, Version V was later updated to allow for more general symmetric encryption algorithms.
Cliff to Trent: Cliff sends a message to Trent that contains his name and the name of the ticket-granting server that he will use (in this case Grant).
Trent to Cliff: Trent looks up Cliff’s name in his database. If he finds it, he generates a session key K_CG that will be used between Cliff and Grant. Trent also has a secret key K_C with which he can communicate with Cliff, so he uses this to encrypt the Cliff–Grant session key:

e_{K_C}(K_CG).

In addition, Trent creates a Ticket Granting Ticket (TGT), which will allow Cliff to authenticate himself to Grant. This ticket is encrypted using Grant’s personal key K_G (which Trent also has):

TGT = GrantName ∥ e_{K_G}(CliffName ∥ CliffAddress ∥ Timestamp1 ∥ K_CG).

Here ∥ is used to denote concatenation. The ticket that Cliff receives is the concatenation of these two subtickets:

Ticket = e_{K_C}(K_CG) ∥ TGT.
Cliff to Grant: Cliff can extract K_CG using the key K_C, which he shares with Trent. Using K_CG, Cliff can now communicate securely with Grant. Cliff now creates an authenticator, which will consist of his name, his address, and a timestamp. He encrypts this using K_CG to get

Auth_CG = e_{K_CG}(CliffName ∥ CliffAddress ∥ Timestamp2).

Cliff now sends Auth_CG as well as TGT to Grant so that Grant can administer a service ticket.
Grant to Cliff: Grant now has Auth_CG and TGT. Part of TGT was encrypted using Grant’s secret key, so Grant can extract this portion and can decrypt it. Thus he can recover Cliff’s name, Cliff’s address, Timestamp1, as well as K_CG. Grant can now use K_CG to decrypt Auth_CG in order to verify the authenticity of Cliff’s request. That is, Auth_CG will provide another copy of Cliff’s name, Cliff’s address, and a different timestamp. If the two versions of Cliff’s name and address match, and if Timestamp1 and Timestamp2 are sufficiently close to each other, then Grant will declare Cliff valid. Now that Cliff is approved by Grant, Grant will generate a session key K_CS for Cliff to communicate with Serge and will also return a service ticket. Grant has a secret key K_S, which he shares with Serge. The service ticket is

ServTicket = e_{K_S}(CliffName ∥ CliffAddress ∥ Timestamp3 ∥ ExpirationTime ∥ K_CS).

Here ExpirationTime is a quantity that describes the length of validity for this service ticket. The session key K_CS is encrypted using the session key between Cliff and Grant:

e_{K_CG}(K_CS).

Grant sends ServTicket and e_{K_CG}(K_CS) to Cliff.
Cliff to Serge: Cliff is now ready to start making use of Serge’s services. He starts by decrypting e_{K_CG}(K_CS) in order to get the session key K_CS that he will use while communicating with Serge. He creates an authenticator to use with Serge:

Auth_CS = e_{K_CS}(CliffName ∥ CliffAddress ∥ Timestamp4).

Cliff now sends Serge Auth_CS as well as ServTicket. Serge can decrypt ServTicket and extract from it the session key K_CS that he is to use with Cliff. Using this session key, he can decrypt Auth_CS and verify that Cliff is who he says he is, and that Timestamp4 is within ExpirationTime of Timestamp3. If Timestamp4 is not within ExpirationTime of Timestamp3, then Cliff’s ticket is stale and Serge rejects his request for service. Otherwise, Cliff and Serge may make use of K_CS to perform their exchange.
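The whole Kerberos flow can be traced with the same symbolic-encryption style used earlier: a ciphertext is a tuple tagged with the key name, so the sketch checks who can read what rather than doing real cryptography. The key names (K_C, K_G, K_S for the long-term keys, K_CG and K_CS for the session keys) follow the description above; the address string and the freshness window are our placeholders.

```python
import time

def enc(key, payload):
    return ("enc", key, payload)

def dec(key, ct):
    tag, k, payload = ct
    assert tag == "enc" and k == key, "wrong key"
    return payload

K_C, K_G, K_S = "K_C", "K_G", "K_S"  # Cliff's, Grant's, Serge's long-term keys

# Cliff -> Trent: (Cliff, Grant).  Trent -> Cliff: e_KC(K_CG) and the TGT.
K_CG = "cliff-grant-session-key"
t1 = time.time()
tgt = enc(K_G, ("Cliff", "addr", t1, K_CG))
for_cliff = enc(K_C, K_CG)

# Cliff -> Grant: Auth_CG = e_KCG(name, addr, Timestamp2), plus the TGT.
session_cg = dec(K_C, for_cliff)
auth_cg = enc(session_cg, ("Cliff", "addr", time.time()))

# Grant checks the authenticator against the TGT, then issues the
# service ticket and e_KCG(K_CS).
name, addr, ts1, k_cg = dec(K_G, tgt)
a_name, a_addr, ts2 = dec(k_cg, auth_cg)
assert (name, addr) == (a_name, a_addr) and abs(ts2 - ts1) < 300
K_CS, t3, EXPIRE = "cliff-serge-session-key", time.time(), 3600
serv_ticket = enc(K_S, (name, addr, t3, EXPIRE, K_CS))
for_cliff2 = enc(k_cg, K_CS)

# Cliff -> Serge: Auth_CS = e_KCS(name, addr, Timestamp4), plus ServTicket.
session_cs = dec(session_cg, for_cliff2)
auth_cs = enc(session_cs, ("Cliff", "addr", time.time()))

# Serge validates the ticket and the authenticator, then serves Cliff.
s_name, s_addr, ts3, expire, k_cs = dec(K_S, serv_ticket)
c_name, c_addr, ts4 = dec(k_cs, auth_cs)
assert (s_name, s_addr) == (c_name, c_addr) and ts4 - ts3 < expire
```

Notice that Cliff never sees K_S or K_G, and Serge never sees K_C: each party decrypts only the pieces sealed under keys it holds, which is what confines a stolen ticket to the one server it names.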
Some comments about the different versions of Kerberos are appropriate. There are two main versions of Kerberos that one will find discussed in the literature: Kerberos Version IV, which was developed originally as part of MIT’s Project Athena; and Kerberos Version V, which was originally published in 1993 and was intended to address limitations and weaknesses in Version IV. Kerberos Version V was subsequently revised in 2005. We now describe some of the differences between the two versions.
Both Version IV and Version V follow the basic model that we have presented. Kerberos Version IV was designed to work with the computer networks associated with Project Athena, and consequently Version V was enhanced to support authentication and key exchange on larger networks. In particular, Kerberos Version IV was limited to Internet Protocol version 4 (IPv4) addresses to specify clients and servers, while Kerberos Version V expanded the use of network protocols to support multiple IP addresses and addresses associated with other types of network protocols, such as the longer addresses of Internet Protocol version 6 (IPv6).
In terms of the cryptography that Kerberos uses, Version IV only allowed the use of the Data Encryption Standard (DES) with a nonstandard mode of operation known as Propagating Cipher Block Chaining (PCBC). Due to concerns about the security of PCBC mode, it was removed in Version V. Instead, Version V allows for the use of more general symmetric encryption algorithms, such as the use of AES, and incorporated integrity checking mechanisms directly into its use of the normal CBC mode of operation.
The tickets used in Kerberos Version IV had strict limits to their duration. In Kerberos Version V, the representation for the tickets was expanded to include start and stop times for the certificates. This allows for tickets in Version V to be specified with arbitrary durations. Additionally, the functionality of tickets in Version V was expanded. In Version V, it is possible for servers to forward, renew, and postdate tickets, while in Version IV authentication forwarding is not allowed. Overall, the changes made to Version V were intended to make Kerberos more secure and more flexible, allowing it to operate in more types of networks, support a broader array of cipher algorithms, and allow for it to address a broader set of application requirements.
Public key cryptography is a powerful tool that allows for authentication, key distribution, and non-repudiation. In these applications, the public key is published, but when you access public keys, what assurance do you have that Alice’s public key actually belongs to Alice? Perhaps Eve has substituted her own public key in place of Alice’s. Unless confidence exists in how the keys were generated, and in their authenticity and validity, the benefits of public key cryptography are minimal.
In order for public key cryptography to be useful in commercial applications, it is necessary to have an infrastructure that keeps track of public keys. A public key infrastructure, or PKI for short, is a framework consisting of policies defining the rules under which the cryptographic systems operate and procedures for generating and publishing keys and certificates.
All PKIs consist of certification and validation operations. Certification binds a public key to an entity, such as a user or a piece of information. Validation guarantees that certificates are valid.
A certificate is a quantity of information that has been signed by its publisher, who is commonly referred to as the certification authority (CA). There are many types of certificates. Two popular ones are identity certificates and credential certificates. Identity certificates contain an entity’s identity information, such as an email address, and a list of public keys for the entity. Credential certificates contain information describing access rights. In either case, the data are typically encrypted using the CA’s private key.
Suppose we have a PKI, and the CA publishes identity certificates for Alice and Bob. If Alice knows the CA’s public key, then she can take the encrypted identity certificate for Bob that has been published and extract Bob’s identity information as well as a list of public keys needed to communicate securely with Bob. The difference between this scenario and the conventional public key scenario is that Bob doesn’t publish his keys, but instead the trust relationship is placed between Alice and the publisher. Alice might not trust Bob as much as she might trust a CA such as the government or the phone company. The concept of trust is critical to PKIs and is perhaps one of the most important properties of a PKI.
It is unlikely that a single entity could ever keep track of and issue every Internet user’s public keys. Instead, PKIs often consist of multiple CAs that are allowed to certify each other and the certificates they issue. Thus, Bob might be associated with a different CA than Alice, and when requesting Bob’s identity certificate, Alice might only trust it if her CA trusts Bob’s CA. On large networks like the Internet, there may be many CAs between Alice and Bob, and it becomes necessary for each of the CAs between her and Bob to trust each other.
In addition, most PKIs have varying levels of trust, allowing some CAs to certify other CAs with varying degrees of trust. It is possible that CAs may only trust other CAs to perform specific tasks. For example, Alice’s CA may only trust Bob’s CA to certify Bob and not certify other CAs, while Alice’s CA may trust Dave’s CA to certify other CAs. Trust relationships can become very elaborate, and, as these relationships become more complex, it becomes more difficult to determine to what degree Alice will trust a certificate that she receives.
In the following two sections, we discuss two examples of PKIs that are used in practice.
Suppose you want to buy something on the Internet. You go to the website Gigafirm.com, select your items, and then proceed to the checkout page. You are asked to enter your credit card number and other information. The website assures you that it is using secure public key encryption, using Gigafirm’s public key, to set up the communications. But how do you know that Eve hasn’t substituted her public key? In other words, when you are using public keys, how can you be sure that they are correct? This is the purpose of Digital Certificates.
One of the most popular types of certificate is the X.509. In this system, every user has a certificate. The validity of the certificates depends on a chain of trust. At the top is a certification authority (CA). These are often commercial companies such as VeriSign, GTE, AT&T, and others. It is assumed that the CA is trustworthy. The CA produces its own certificate and signs it. This certificate is often posted on the CA’s website. In order to ensure that their services are used frequently, various CAs arrange to have their certificates packaged into Internet browsers such as Chrome, Firefox, Safari, Internet Explorer, and Edge.
The CA then (for a fee) produces certificates for various clients, such as Gigafirm. Such a certificate contains Gigafirm’s public key. It is signed by the CA using the CA’s private key. Often, for efficiency, the CA authorizes various registration authorities (RA) to sign certificates. Each RA then has a certificate signed by the CA.
A certificate holder can sometimes then sign certificates for others. We therefore get a certification hierarchy where the validity of each certificate is certified by the user above it, and this continues all the way up to the CA.
If Alice wants to verify that Gigafirm’s public key is correct, she uses her copy of the CA’s certificate (stored in her computer) to get the CA’s public key. She then uses it to verify the signature on Gigafirm’s certificate. If it is valid, she trusts the certificate and thus has a trusted public key for Gigafirm. Of course, she must trust the CA’s public key. This means that she trusts the company that packaged the CA’s certificate into her computer. The computer company of course has a financial incentive to maintain a good reputation, so this trust is reasonable. But if Alice has bought a used computer in which Eve has tampered with the certificates, there might be a problem (in other words, don’t buy used computers from your enemies, except to extract unerased information).
Figures 15.3, 15.4, and 15.5 show examples of X.509 certificates. The ones in Figures 15.3 and 15.4 are for a CA, namely VeriSign. The part in Figure 15.3 gives the general information about the certificate, including its possible uses. Figure 15.4 gives the detailed information. The one in Figure 15.5 is an edited version of the Details part of a certificate for the bank Wells Fargo.
Some of the fields in Figure 15.4 are as follows:
Version: There are three versions, the first being Version 1 (from 1988) and the most recent being Version 3 (from 1997).
Serial number: There is a unique serial number for each certificate issued by the CA.
Signature algorithm: Various signature algorithms can be used. This one uses RSA to sign the output of the hash function SHA-1.
Issuer: The name of the CA that created and signed this certificate. OU is the organizational unit, O is the organization, C is the country.
Subject: The name of the holder of this certificate.
Public key: Several options are possible. This one uses RSA with a 1024-bit modulus. The key is given in hexadecimal notation. In hexadecimal, the letters a, b, c, d, e, f represent the numbers 10, 11, 12, 13, 14, 15. Each pair of symbols is a byte, which is eight bits. For example, b6 represents 11, 6, which is 10110110 in binary.
The last three bytes of the public key are 01 00 01, which is 2^16 + 1 = 65537. This is a very common encryption exponent e for RSA, since raising something to this power by successive squaring (see Section 3.5) is fast. The preceding bytes 02 03 and the bytes 30 81 89 02 81 81 00 at the beginning of the key are control symbols. The remaining 128 bytes aa d0 ba ⋯ 6b e7 75 are the 1024-bit RSA modulus n.
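The byte arithmetic above can be checked directly, and a short routine shows why e = 2^16 + 1 is a cheap exponent: successive squaring needs only 16 squarings and one extra multiplication. The sample base and modulus are arbitrary.

```python
# Hex-to-number conversions from the text, checked directly.
assert int("b6", 16) == 0b10110110 == 182        # b6 -> 1011 0110
assert int("010001", 16) == 2**16 + 1 == 65537   # the common RSA exponent e

def pow_65537(x, n):
    """Compute x^65537 mod n via 16 squarings and one final multiply."""
    y = x % n
    for _ in range(16):
        y = y * y % n        # after the loop, y = x^(2^16) mod n
    return y * x % n         # one more multiply gives x^(2^16 + 1) mod n

# Agrees with Python's built-in modular exponentiation.
assert pow_65537(12345, 99991) == pow(12345, 65537, 99991)
```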
Signature: The preceding information on the certificate is hashed using the hash algorithm specified – in this case, SHA-1 – and then signed by raising the hash to the CA’s private RSA decryption exponent.
The certificate in Figure 15.5 has a few extra lines. One notable entry is under the heading Certificate Hierarchy. The certificate of Wells Fargo has been signed by the Registration Authority (RA) on the preceding line. In turn, the RA’s certificate has been signed by the root CA. Another entry worth noting is CRL Distribution Points. This is the certificate revocation list. It contains lists of certificates that have been revoked. There are two common methods of distributing the information from these lists to the users. Neither is perfect. One way is to send out announcements whenever a certificate is revoked. This has the disadvantage of sending a lot of irrelevant information to most users (most people don’t need to know if the Point Barrow Sunbathing Club loses its certificate). The second method is to maintain a list (such as the one at the listed URL) that can be accessed whenever needed. The disadvantage here is the delay caused by checking each certificate. Also, such a website could get overcrowded if many people try to access it at once. For example, if everyone tries to trade stocks during their lunch hour, and the computers check each certificate for revocation during each transaction, then a site could be overwhelmed.
When Alice (or, usually, her computer) wants to check the validity of the certificate in Figure 15.5, she sees from the certificate hierarchy that VeriSign’s RA signed Wells Fargo’s certificate and the RA’s certificate was signed by the root CA. She verifies the signature on Wells Fargo’s certificate by using the public key (that is, the RSA pair (n, e)) from the RA’s certificate; namely, she raises the encrypted hash value to the eth power mod n. If this equals the hash of Wells Fargo’s certificate, then she trusts Wells Fargo’s certificate, as long as she trusts the RA’s certificate. Similarly, she can check the RA’s certificate using the public key on the root CA’s certificate. Since she received the root CA’s certificate from a reliable source (for example, it was packaged in her Internet browser, and the company doing this has a financial incentive to keep a good reputation), she trusts it. In this way, Alice has established the validity of Wells Fargo’s certificate. Therefore, she can confidently do online transactions with Wells Fargo.
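This chain-walking check can be sketched with textbook-sized RSA keys: signing raises the hash to the private exponent d mod n, and verification raises the signature back to the public exponent e. The key pairs below are the classic toy values (p = 61, q = 53 and p = 89, q = 97), hopelessly insecure and chosen only so every number is visible; the certificate contents are hypothetical.

```python
import hashlib

# Toy certificate chain: sign = hash^d mod n, verify = sig^e mod n.
ROOT = {"n": 3233, "e": 17, "d": 2753}   # root CA keypair (p=61, q=53)
RA   = {"n": 8633, "e": 5,  "d": 5069}   # registration authority (p=89, q=97)

def h(data, n):
    """Hash the certificate bytes down to a number below the modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(signer, data):
    return pow(h(data, signer["n"]), signer["d"], signer["n"])

def verify(pub_n, pub_e, data, sig):
    return pow(sig, pub_e, pub_n) == h(data, pub_n)

# The root CA signs the RA's certificate; the RA signs the end certificate.
ra_cert = b"RA public key: n=8633, e=5"
ra_sig = sign(ROOT, ra_cert)
end_cert = b"subject public key (hypothetical)"
end_sig = sign(RA, end_cert)

# Alice walks the chain downward from the root key she already trusts.
assert verify(ROOT["n"], ROOT["e"], ra_cert, ra_sig)
assert verify(RA["n"], RA["e"], end_cert, end_sig)
```

The order matters: the end certificate is only as trustworthy as the RA key used to check it, and that key is in turn vouched for by the root signature, which is why trust must start from a certificate obtained out of band.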
There are two levels of certificates. The high assurance certificates are issued by the CA under fairly strict controls. High assurance certificates are typically issued to commercial firms. The low assurance certificates are issued more freely and certify that the communications are from a particular source. Therefore, if Bob obtains such a certificate for his computer, the certificate verifies that it is Bob’s computer but does not tell whether it is Bob or Eve using the computer. The certificates on many personal computers contain the following line:
Subject: Verisign Class 1 CA Individual Subscriber - Persona Not Validated.
This indicates that the certificate is a low assurance certificate. It does not make any claim as to the identity of the user.
If your computer has Edge, for example, click on Tools, then Internet Options, then Content, then Certificates. This will allow you to find the CAs whose certificates have been packaged with the browser. Usually, the validity of most of them has not been checked. But for the accepted ones, it is possible to look at the certification path, which gives the path (often one step) from the user’s computer’s certificate back to the CA.
Pretty Good Privacy, more commonly known as PGP, was developed by Phil Zimmermann in the late 1980s and early 1990s. In contrast to X.509 certificates, PGP is a very decentralized system with no CA. Each user has a certificate, but the trust in this certificate is certified to various degrees by other users. This creates a web of trust.
For example, if Alice knows Bob and can verify directly that his certificate is valid, then she signs his certificate with her private key. Charles trusts Alice and has her public key, and therefore can check that Alice’s signature on Bob’s certificate is valid. Charles then trusts Bob’s certificate. However, this does not mean that Charles trusts certificates that Bob signs – he trusts Bob’s public key. Bob could be gullible and sign every certificate that he encounters. His signature would be valid, but that does not mean that the certificate is.
Alice maintains a file with a keyring containing the trust levels Alice has in various people’s signatures. There are varying levels of trust that she can assign: no information, no trust, partial trust, and complete trust. When a certificate’s validity is being judged, the PGP program accepts certificates that are signed by someone Alice trusts, or a sufficient combination of partial trusts. Otherwise it alerts Alice and she needs to make a choice on whether to proceed.
The primary use of PGP is for authenticating and encrypting email. Suppose Alice receives an email asking for her bank account number so that Charles can transfer millions of dollars into her account. Alice wants to be sure that this email comes from Charles and not from Eve, who wants to use the account number to empty Alice’s account. In the unlikely case that this email actually comes from her trusted friend Charles, Alice sends her account information, but she should encrypt it so that Eve cannot intercept it and empty Alice’s account. Therefore, the first email needs authentication that proves that it comes from Charles, while the second needs encryption. There are also cases where both authentication and encryption are desirable. We’ll show how PGP handles these situations.
To keep the discussion consistent, we’ll always assume that Alice is sending a message to Bob. Alice’s RSA public key is (n, e) and her private key is the decryption exponent d.
Authentication.
Alice uses a hash function and computes the hash of the message.
Alice signs the hash by raising it to her secret decryption exponent d mod n. The resulting hash code is put at the beginning of the message, which is sent to Bob.
Bob raises the hash code to Alice’s public RSA exponent e mod n. The result is compared to the hash of the rest of the message.
If the result agrees with the hash, and if Bob trusts Alice’s public key, the message is accepted as coming from Alice.
This authentication is the RSA signature method from Section 13.1. Note the role that trust plays. If Bob does not trust Alice’s public key as belonging to Alice, then he cannot be sure that the message did not come from Eve, with Eve’s signature in place of Alice’s.
Encryption.
Alice’s computer generates a random number, usually 128 bits, to be used as the session key for a symmetric private key encryption algorithm such as 3DES, IDEA, or CAST-128 (these last two are block ciphers using 128-bit keys).
Alice uses the symmetric algorithm with this session key to encrypt her message.
Alice encrypts the session key using Bob’s public key.
The encrypted key and the encrypted message are sent to Bob.
Bob uses his private RSA key to decrypt the session key. He then uses the session key to decrypt Alice’s message.
The combination of a public key algorithm and a symmetric algorithm is used because encryption is generally faster with symmetric algorithms than with public key algorithms. Therefore, the public key algorithm RSA is used for the small encryption of the session key, and then the symmetric algorithm is used to encrypt the potentially much larger message.
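The hybrid scheme above can be sketched end to end. To stay self-contained, a toy RSA key pair wraps the session key, and a SHA-256 counter keystream XORed into the message stands in for the symmetric cipher (real PGP uses 3DES, IDEA, or CAST-128); the key sizes and message are illustrative only.

```python
import hashlib
import secrets

BOB = {"n": 3233, "e": 17, "d": 2753}   # toy RSA keypair, far too small for real use

def keystream_xor(key, data):
    """Toy symmetric cipher: XOR data against a SHA-256 counter keystream.
    Applying it twice with the same key restores the original bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Alice: pick a random session key, symmetric-encrypt the message,
# and wrap the session key with Bob's public RSA key.
session_key = secrets.randbelow(BOB["n"])
ciphertext = keystream_xor(str(session_key).encode(), b"meet at noon")
wrapped = pow(session_key, BOB["e"], BOB["n"])

# Bob: unwrap the session key with his private exponent, then decrypt.
recovered = pow(wrapped, BOB["d"], BOB["n"])
plaintext = keystream_xor(str(recovered).encode(), ciphertext)
assert plaintext == b"meet at noon"
```

The expensive public key operation touches only the short session key; the bulk of the message, however long, goes through the fast symmetric cipher, which is exactly the efficiency argument made above.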
Authentication and Encryption.
Alice hashes her message and signs the hash to obtain the hash code, as in step (2) of the authentication procedure described previously. This hash code is put at the beginning of the message.
Alice produces a random 128-bit session key and uses a symmetric algorithm with this session key to encrypt the hash code together with the message, as in the encryption procedure described previously.
Alice uses Bob’s public key to encrypt the session key.
The encrypted session key and the encryption of the hash code and message are sent to Bob.
Bob uses his private key to decrypt the session key.
Bob uses the session key to obtain the hash code and message.
Bob verifies the signature by using Alice’s public key, as in the authentication procedure described previously.
Of course, this procedure requires that Bob trusts Alice’s public key certificate. Also, the reason the signature is done before the encryption is so that Bob can discard the session key after decrypting and therefore store the plaintext message with its signature.
To set up a PGP certificate, Alice’s computer uses random input obtained from keystrokes, timing, mouse movements, etc., to find primes p and q, and then produce an RSA modulus n = pq and encryption and decryption exponents e and d. The numbers n and e are then Alice’s public key. Alice also chooses a secret passphrase. The secret key is stored securely in her computer. When the computer needs to use her private key, the computer asks her for her passphrase to be sure that Alice is the correct person. This prevents Eve from using Alice’s computer and pretending to be Alice. The advantage of the passphrase is that Alice is not required to memorize or type in the decryption exponent d, which is probably more than one hundred digits long.
In the preceding, we have used RSA for signatures and for encryption of the session keys. Other possibilities are allowed. For example, Diffie-Hellman can be used to establish the session key, and DSA can be used to sign the message.
If you have ever paid for anything over the Internet, your transactions were probably kept secret by SSL or its close relative TLS. Secure Sockets Layer (SSL) was developed by Netscape in order to perform HTTP communications securely. The first version was released in 1994. Version 3 was released in 1995. Transport Layer Security (TLS) is a slight modification of SSL version 3 and was released by the Internet Engineering Task Force in 1999. These protocols are designed for communications between computers with no previous knowledge of each other’s capabilities.
In the following, we’ll describe SSL version 3. TLS differs in a few minor details such as how the pseudorandom numbers are calculated. SSL consists of two main components. The first component is known as the record protocol and is responsible for compressing and encrypting the bulk of the data sent between two entities. The second component is a collection of management protocols that are responsible for setting up and maintaining the parameters used by the record protocol. The main part of this component is called the handshake protocol.
We will begin by looking at the handshake protocol, which is the most complicated part of SSL. Let us suppose that Alice has bought something online from Gigafirm and wants to pay for her purchase. The handshake protocol performs authentication between Alice’s computer and the server at Gigafirm and is used to allow Alice and Gigafirm to agree upon various cryptographic algorithms. Alice’s computer starts by sending Gigafirm’s computer a message containing the following:
The highest version of SSL that Alice’s computer can support
A 32-byte random number consisting of a 4-byte timestamp and 28 random bytes
A Cipher Suite containing, in decreasing order of preference, the algorithms that Alice’s computer wants to use for public key (for example, RSA, Diffie-Hellman, ...), block cipher encryption (3DES, DES, AES, ...), hashing (SHA-1, MD5, ...), and compression (PKZip, ...)
Gigafirm’s computer responds with a random 32-byte number (chosen similarly) and its choices of which algorithms to use; for example, RSA, DES, SHA-1, PKZip.
Gigafirm’s computer then sends its X.509 certificate (and the certificates in its certification chain). Gigafirm can ask for Alice’s certificate, but this is rarely done for two reasons. First, it would impede the transaction, especially if Alice does not have a valid certificate. This would not help Gigafirm accomplish its goal of making sales. Second, Alice is going to send her credit card number later in the transaction, and this serves to verify that Alice (or the thief who picked her pocket) has Alice’s card.
We’ll assume from now on that RSA was chosen for the public key method. The protocol differs only slightly for other public key methods.
Alice now generates a 48-byte pre-master secret, encrypts it with Gigafirm’s public key (from its certificate), and sends the result to Gigafirm, who decrypts it. Both Alice and Gigafirm now have the following secret random numbers:
The 32-byte random number that Alice sent Gigafirm.
The 32-byte random number that Gigafirm sent Alice.
The 48-byte pre-master secret.
Note that the two 32-byte numbers were not sent securely. The pre-master secret is secure, however.
Since they both have the same numbers, both Alice and Gigafirm can calculate the master secret as the concatenation of

MD5(pre_master ∥ SHA-1(‘A’ ∥ pre_master ∥ r_A ∥ r_G))
MD5(pre_master ∥ SHA-1(‘BB’ ∥ pre_master ∥ r_A ∥ r_G))
MD5(pre_master ∥ SHA-1(‘CCC’ ∥ pre_master ∥ r_A ∥ r_G)),

where pre_master is the pre-master secret, and r_A and r_G are the random numbers that Alice and Gigafirm exchanged. The ‘A’, ‘BB’, and ‘CCC’ are strings added for padding. Note that timestamps are built into r_A and r_G. This prevents Eve from doing replay attacks, where she tries to use information intercepted from one session to perform similar transactions later.
Since MD5 produces a 128-bit (= 16-byte) output, the master secret has 48 bytes. The master secret is used to produce a key block, by the same process that the master secret was produced from the pre-master secret. Enough hashes are concatenated to produce a sufficiently long key block. The key block is then cut into six secret keys, three for communications from Alice to Gigafirm and three for communications from Gigafirm to Alice. For Alice to Gigafirm, one key serves as the secret key in the block cipher (3DES, AES, ...) chosen at the beginning of the communications. The second is a message authentication key. The third is the initial value for the CBC mode of the block cipher. The three other keys are for the corresponding purposes for Gigafirm to Alice.
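The derivation of the master secret can be sketched directly with the hash functions from Python’s standard library (this assumes the ‘A’/‘BB’/‘CCC’ padding strings of SSL version 3):

```python
# SSLv3-style master-secret computation: three MD5-of-SHA-1 rounds, one per
# padding string, concatenated to give 48 bytes.
import hashlib

def master_secret(pre_master: bytes, r_alice: bytes, r_gigafirm: bytes) -> bytes:
    out = b""
    for pad in (b"A", b"BB", b"CCC"):
        inner = hashlib.sha1(pad + pre_master + r_alice + r_gigafirm).digest()
        out += hashlib.md5(pre_master + inner).digest()   # 16 bytes per round
    return out                                            # 3 * 16 = 48 bytes

ms = master_secret(b"\x00" * 48, b"\x01" * 32, b"\x02" * 32)
assert len(ms) == 48
```

The key block is produced from the master secret by the same pattern, with enough rounds concatenated to cut out the six keys.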
Now Alice and Gigafirm are ready to communicate using the record protocol. When Alice sends a message to Gigafirm, she does the following:
Compresses the message using the agreed-upon compression method.
Hashes the compressed message together with the message authentication key (the second key obtained from the key block). This yields the hashed message authentication code.
Uses the block cipher in CBC mode to encrypt the compressed message together with the hashed message authentication code, and sends the result to Gigafirm.
Gigafirm now does the following:
Uses the block cipher to decrypt the message received. Gigafirm now has the compressed message and the hashed message authentication code.
Uses the compressed message and the Alice-to-Gigafirm message authentication key to recompute the hashed message authentication code. If it agrees with the hashed message authentication code that was in the message, the message is authenticated.
Decompresses the compressed message to obtain Alice’s message.
Communications from Gigafirm are encrypted and decrypted similarly, using the other three keys deduced from the key block. Therefore, Alice and Gigafirm can exchange information securely.
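The record-protocol steps (compress, authenticate, encrypt, and their inverses) can be sketched as follows. Here zlib plays the role of the negotiated compression method, HMAC-SHA-1 the message authentication, and a hash-based XOR stream is a stand-in for the block cipher in CBC mode.

```python
# Record protocol sketch: compress -> MAC -> encrypt on send; the reverse,
# with MAC verification, on receive.
import hashlib
import hmac
import zlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    out = b""
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def record_send(message: bytes, mac_key: bytes, enc_key: bytes) -> bytes:
    compressed = zlib.compress(message)                         # step 1: compress
    tag = hmac.new(mac_key, compressed, hashlib.sha1).digest()  # step 2: MAC
    return stream_xor(enc_key, compressed + tag)                # step 3: encrypt

def record_recv(blob: bytes, mac_key: bytes, enc_key: bytes) -> bytes:
    data = stream_xor(enc_key, blob)                            # decrypt
    compressed, tag = data[:-20], data[-20:]                    # SHA-1 tag: 20 bytes
    expected = hmac.new(mac_key, compressed, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message authentication failed")
    return zlib.decompress(compressed)                          # decompress

sent = record_send(b"Charge my card for one book.", b"mac-key", b"enc-key")
assert record_recv(sent, b"mac-key", b"enc-key") == b"Charge my card for one book."
```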
Every time someone places an order in an electronic transaction over the Internet, large quantities of information are transmitted. These data must be protected from unwanted eavesdroppers in order to ensure the customer’s privacy and prevent credit fraud. Requirements for a good electronic commerce system include the following:
Authenticity: Participants in a transaction cannot be impersonated and signatures cannot be forged.
Integrity: Documents such as purchase orders and payment instructions cannot be altered.
Privacy: The details of a transaction should be kept secure.
Security: Sensitive account information such as credit card numbers must be protected.
All of these requirements should be satisfied, even over public communication channels such as the Internet.
In 1996, the credit card companies MasterCard and Visa called for the establishment of standards for electronic commerce. The result, whose development involved several companies, is called the Secure Electronic Transaction, or SET, protocol. It starts with the existing credit card system and allows people to use it securely over open channels.
The SET protocol is fairly complex, involving, for example, the SSL protocol in order to certify that the cardholder and merchant are legitimate and also specifying how payment requests are to be made. In the following we’ll discuss one aspect of the whole protocol, namely the use of dual signatures.
There are several possible variations on the following. For example, in order to improve speed, a fast symmetric key system can be used in conjunction with the public key system. If there is a lot of information to be transmitted, a randomly chosen symmetric key plus the hash of the long message can be sent via the public key system, while the long message itself is sent via the faster symmetric system. However, we’ll restrict our attention to the simplest case where only public key methods are used.
Suppose Alice wants to buy a book entitled How to Use Other People’s Credit Card Numbers to Defraud Banks, which she has seen advertised on the Internet. For obvious reasons, she feels uneasy about sending the publisher her credit card information, and she certainly does not want the bank that issued her card to know what she is buying. A similar situation applies to many transactions. The bank does not need to know what the customer is ordering, and for security reasons the merchant should not know the card number. However, these two pieces of information need to be linked in some way. Otherwise the merchant could attach the payment information to another order. Dual signatures solve this problem.
The three participants in the following will be the Cardholder (namely, the purchaser), the Merchant, and the Bank (which authorizes the use of the credit card).
The Cardholder has two pieces of information:
The Goods and Services Order, GSO, which consists of the cardholder’s and merchant’s names, the quantities of each item ordered, the prices, etc.
The Payment Instructions, PI, including the merchant’s name, the credit card number, the total price, etc.
The system uses a public hash function; let’s call it H. Also, a public key cryptosystem such as RSA is used, and the Cardholder and the Bank have their own public and private keys. Let E_C, E_M, and E_B denote the (public) encryption functions for the Cardholder, the Merchant, and the Bank, and let D_C, D_M, and D_B be the (private) decryption functions.
The Cardholder performs the following procedures:
Calculates GSOMD = H(E_M(GSO)), which is the message digest, or hash, of an encryption of GSO.
Calculates PIMD = H(E_B(PI)), which is the message digest of an encryption of PI.
Concatenates GSOMD and PIMD to obtain GSOMD ∥ PIMD, then computes the hash of the result to obtain the payment-order message digest POMD = H(GSOMD ∥ PIMD).
Signs POMD by computing DS = D_C(POMD). This is the dual signature.
Sends DS, E_M(GSO), PIMD, and E_B(PI) to the Merchant.
The Merchant then does the following:
Calculates H(E_M(GSO)) (which should equal GSOMD).
Calculates H(H(E_M(GSO)) ∥ PIMD) and E_C(DS). If they are equal, then the Merchant has verified the Cardholder’s signature and is therefore convinced that the order is from the Cardholder.
Computes D_M(E_M(GSO)) to obtain GSO.
Sends GSOMD, E_B(PI), and DS to the Bank.
The Bank now performs the following:
Computes H(E_B(PI)) (which should equal PIMD).
Concatenates GSOMD and H(E_B(PI)).
Computes H(GSOMD ∥ H(E_B(PI))) and E_C(DS). If they are equal, the Bank has verified the Cardholder’s signature.
Computes D_B(E_B(PI)), obtaining the payment instructions PI.
Returns an encrypted (with E_M) digitally signed authorization to the Merchant, guaranteeing payment.
The Merchant completes the procedure as follows:
Returns an encrypted (with E_C) digitally signed receipt to the Cardholder, indicating that the transaction has been completed.
The Merchant only sees the encrypted form E_B(PI) of the payment instructions, and so does not see the credit card number. It would be infeasible for the Merchant or the Bank to modify any of the information regarding the order because the hash function is used to compute POMD.
The Bank only sees the message digest of the Goods and Services Order, and so has no idea what is being ordered.
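The hash bookkeeping behind the dual signature can be traced numerically. The sketch below uses a hypothetical toy RSA key for the Cardholder and placeholder byte strings for the two encrypted documents; only the hash chain and the signature are modeled, not the E_M and E_B encryptions themselves.

```python
# Dual-signature sketch: the Merchant and the Bank each verify the same
# signature while each sees only one of the two documents.
import hashlib

n, e, d = 3233, 17, 2753               # hypothetical toy RSA key (Cardholder)
H = lambda data: hashlib.sha256(data).digest()
as_int = lambda digest: int.from_bytes(digest, "big") % n

enc_gso = b"<stand-in for E_M(GSO), the encrypted goods and services order>"
enc_pi  = b"<stand-in for E_B(PI), the encrypted payment instructions>"

# Cardholder
gsomd = H(enc_gso)                     # GSOMD
pimd  = H(enc_pi)                      # PIMD
pomd  = H(gsomd + pimd)                # POMD
ds = pow(as_int(pomd), d, n)           # dual signature: sign POMD

# Merchant: given enc_gso, pimd, ds -- verifies without ever seeing PI
assert pow(ds, e, n) == as_int(H(H(enc_gso) + pimd))

# Bank: given gsomd, enc_pi, ds -- verifies without ever seeing the order
assert pow(ds, e, n) == as_int(H(gsomd + H(enc_pi)))
```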
The requirements of integrity, privacy, and security are met by this procedure. In actual implementations, several more steps are required in order to protect authenticity. For example, it must be guaranteed that the public keys being used actually belong to the participants as claimed, not to impostors. Certificates from a trusted authority are used for this purpose.
In a network of three users, A, B, and C, we would like to use the Blom scheme to establish session keys between pairs of users. Let and let
Suppose Trent chooses the numbers
Calculate the session keys.
Show that in the Blom scheme, .
Show that .
Another way to view the Blom scheme is by using a polynomial in two variables. Define the polynomial . Express the key in terms of .
You (U) and I (I) are evil users on a network that uses the Blom scheme for key establishment with . We have decided to get together to figure out the other session keys on the network. In particular, suppose and . We have received , , , from Trent, the trusted authority. Calculate and .
Here is another version of the intruder-in-the-middle attack on the Diffie-Hellman key exchange in Section 10.1. It has the “advantage” that Eve does not have to intercept and retransmit all the messages between Bob and Alice. Suppose Eve discovers that , where is an integer and is small. Eve intercepts and as before. She sends Bob and sends Alice .
Show that Alice and Bob each calculate the same key .
Show that there are only possible values for , so Eve may find by exhaustive search.
Bob, Ted, Carol, and Alice want to agree on a common key (cryptographic key, that is). They publicly choose a large prime and a primitive root . They privately choose random numbers , respectively. Describe a protocol that allows them to compute securely (ignore intruder-in-the-middle attacks).
Suppose naive Nelson tries to implement an analog of the three-pass protocol of Section 3.6 to send a key to Heidi. He chooses a one-time pad key and XORs it with . He sends to Heidi. She XORs what she receives with her one-time pad key to get . Heidi sends to Nelson, who computes . Nelson sends to Heidi, who recovers as .
Show that .
Suppose Eve intercepts . How can she recover ?
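Nelson’s protocol can be run numerically to see what Eve learns from the three intercepted messages; the pads cancel in pairs.

```python
# Demonstration of Nelson's XOR "three-pass" exchange and what the
# intercepted traffic m1, m2, m3 reveals.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K  = secrets.token_bytes(16)    # the key Nelson wants to send
KN = secrets.token_bytes(16)    # Nelson's one-time pad key
KH = secrets.token_bytes(16)    # Heidi's one-time pad key

m1 = xor(K, KN)                 # Nelson -> Heidi
m2 = xor(m1, KH)                # Heidi -> Nelson
m3 = xor(m2, KN)                # Nelson -> Heidi

assert xor(m3, KH) == K              # Heidi recovers K, as intended...
assert xor(xor(m1, m2), m3) == K     # ...and so can anyone holding m1, m2, m3
```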
Suppose Congressman Bill Passer is receiving large donations from his friend Phil Pockets. For obvious reasons, he would like to hide this fact, pretending instead that the money comes mostly from people such as Vera Goode. Or perhaps Phil does not want Bill to know he’s the source of the money. If Phil pays by check, well-placed sources in the bank can expose him. Similarly, Congressman Passer cannot receive payments via credit card. The only anonymous payment scheme seems to be cash.
But now suppose Passer has remained in office for many terms and we are nearing the end of the twenty-first century. All commerce is carried out electronically. Is it possible to have electronic cash? Several problems arise. For example, near the beginning of the twenty-first century, photocopying money was possible, though a careful recipient could discern differences between the copy and the original. Copies of electronic information, however, are indistinguishable from the original. Therefore, someone who has a valid electronic coin could make several copies. Some method is needed to prevent such double spending. One idea would be for a central bank to have records of every coin and who has each one. But if coins are recorded as they are spent, anonymity is compromised. Occasionally, communications with a central bank could fail temporarily, so it is also desirable for the person receiving the coin to be able to verify the coin as legitimate without contacting the bank during each transaction.
T. Okamoto and K. Ohta [Okamoto-Ohta] list six properties a digital cash system should have:
The cash can be sent securely through computer networks.
The cash cannot be copied and reused.
The spender of the cash can remain anonymous. If the coin is spent legitimately, neither the recipient nor the bank can identify the spender.
The transaction can be done off-line, meaning no communication with the central bank is needed during the transaction.
The cash can be transferred to others.
A piece of cash can be divided into smaller amounts.
Okamoto and Ohta gave a system that satisfies all these requirements. Several systems satisfying some of them have been devised by David Chaum and others. In Section 16.2, we describe a system due to S. Brands [Brands] that satisfies 1 through 4. We include it to show the complicated manipulations that are used to achieve these goals. But an underlying basic problem is that it and the Okamoto-Ohta system require a central bank to set up the system. This limits their use.
In Section 16.3, we give an introduction to Bitcoin, a well-known digital currency. By ingeniously relaxing the requirements of Okamoto-Ohta, it discarded the need for a central bank and therefore became much more widely used than its predecessors.
People have always had a need to engage in trade in order to acquire items that they do not have. The history of commerce has evolved from the ancient model of barter and exchange to notions of promises and early forms of credit (“If you give me bread today, then I’ll give you milk in the future”), to the use of currencies and cash, to the merging of currencies and credit in the form of credit cards. All of these have changed over time, and recent shifts toward conducting transactions electronically have forced these technologies to evolve significantly.
Perhaps the most dominant way in which electronic transactions take place is with the use of credit cards. We are all familiar with making purchases at stores using conventional credit cards: our items are scanned at a register, a total bill is presented, and we pay by swiping our card (or inserting a card with a chip) at a credit card scanner. Using credit cards online is not too different. When you want to buy something online from eVendor, you enter your credit card number along with some additional information (e.g., the CVC, expiration date, address). Whether you use a computer, a smartphone, or any other type of device, there is a secure communication protocol working behind the scenes that sends this information to eVendor, and secure protocols support eVendor by contacting the proper banks and credit card agencies to authorize and complete your transaction.
While using credit cards is extremely easy, there are problems with their use in a world that has been increasingly digital. The early 21st century has seen many examples of companies being hacked and credit card information being stolen. A further problem with the use of credit cards is that companies have the ability to track customer purchases and preferences and, as a result, issues of consumer privacy are becoming more prevalent.
There are alternatives to the use of credit cards. One example is the introduction of an additional layer between the consumer and the vendor. Companies such as PayPal provide such a service. They interact with eVendor, providing a guarantee that eVendor will get paid, while also ensuring that eVendor does not learn who you are. Of course, such a solution raises the question of how much trust one should place in these intermediate companies.
A different alternative comes from looking at society’s use of hard, tangible currencies. Coins and cash have some very nice properties when one considers their use from a security and privacy perspective. First, since they are physical objects representing real value, they provide an immediate protection against any defaulting or credit risk. If Alice wants an object that costs five dollars and she has five dollars, then there is no need for a credit card or an I-Owe-You. The vendor can complete the transaction knowing that he or she has definitely acquired five dollars. Second, cash and coins are not tied to the individual using them, so they provide strong anonymity protection. The vendor doesn’t care about Alice or her identity; it only cares about getting the money associated with the transaction. Likewise, there are no banks directly involved in a transaction. Currency is printed by a central government, and this currency is backed by the government in one way or another.
Cash and coins also have the nice property that they are actually exchanged in a transaction. By this, we mean that when Alice spends her five dollars, she has handed over the money and she is no longer in possession of this money. This means that she can’t spend the same money over and over. Lastly, because of the physical nature of cash and coins, there is no need to communicate with servers and banks to complete a transaction, which allows for transactions to be completed off-line.
In this chapter, we will also discuss the design of digital currencies, starting from one of the early models for digital coins and then exploring the more recent forms of cryptocurrencies by examining the basic cryptographic constructions used in Bitcoin. As we shall see, many of the properties that we take for granted with cash and coins have been particularly difficult to achieve in the digital world.
This section is not needed for the remaining sections of the chapter. It is included because it has some interesting ideas and it shows how hard it is to achieve the desired requirements of a digital currency.
We describe a system due to S. Brands [Brands]. The reader will surely notice that it is much more complicated than the centuries-old system of actual coins. This is because, as we mentioned previously, electronic objects can be reproduced at essentially no cost, in contrast to physical cash, which has usually been rather difficult to counterfeit. Therefore, steps are needed to catch electronic cash counterfeiters. But this means that something like a user’s signature needs to be attached to an electronic coin. How, then, can anonymity be preserved? The solution uses “restricted blind signatures.” This process contributes much of the complexity to the scheme.
Participants are the Bank, the Spender, and the Merchant.
Initialization is done once and for all by some central authority. Choose a large prime p such that q = (p − 1)/2 is also prime (see Exercise 15 in Chapter 13). Let g be the square of a primitive root mod p. This implies that g^a ≡ g^b (mod p) if and only if a ≡ b (mod q). Two secret random exponents are chosen, and g1 and g2 are defined to be g raised to these exponents mod p. These exponents are then discarded (storing them serves no useful purpose, and if a hacker discovers them, then the system is compromised). The numbers

g, g1, g2

are made public. Also, two public hash functions are chosen. The first, H, takes a 5-tuple of integers as input and outputs an integer mod q. The second, H0, takes a 4-tuple of integers as input and outputs an integer mod q.
The bank chooses its secret identity number x and computes

h ≡ g^x (mod p).

The number h is made public and identifies the bank.
The Spender chooses a secret identity number u and computes the account number

I ≡ g1^u (mod p).

The number I is sent to the Bank, which stores I along with information identifying the Spender (e.g., name, address). However, the Spender does not send u to the bank. The Bank sends

z′ ≡ (I g2)^x (mod p)

to the Spender.
The Merchant chooses an identification number M and registers it with the bank.
The Spender contacts the bank, asking for a coin. The bank requires proof of identity, just as when someone is withdrawing classical cash from an account. All coins in the present scheme have the same value. A coin will be represented by a 6-tuple of numbers

(A, B, z, a, b, r).
This may seem overly complicated, but we’ll see that most of this effort is needed to preserve anonymity and at the same time prevent double spending.
Here is how the numbers are constructed.
The Bank chooses a random number w (a different number for each coin), computes

g_w ≡ g^w and β ≡ (I g2)^w (mod p),

and sends g_w and β to the Spender.
The Spender chooses a secret random 5-tuple of integers

(s, x1, x2, α1, α2).
The Spender computes

A ≡ (I g2)^s, B ≡ g1^x1 g2^x2, z ≡ z′^s, a ≡ g_w^α1 g^α2, b ≡ β^(s α1) A^α2 (mod p).

Coins with A ≡ 1 are not allowed. This can happen in only two ways. One is when s ≡ 0 (mod q), so we require s ≢ 0 (mod q). The other is when I g2 ≡ 1 (mod p), which means the Spender has solved a discrete logarithm problem by a lucky choice of u. The prime p should be chosen so large that this has essentially no chance of happening.
The Spender computes

c ≡ α1^(−1) H(A, B, z, a, b) (mod q)

and sends c to the Bank. Here H is the public hash function mentioned earlier.
The Bank computes c1 ≡ cx + w (mod q) and sends c1 to the Spender.
The Spender computes

r ≡ α1 c1 + α2 (mod q).

The coin (A, B, z, a, b, r) is now complete. The amount of the coin is deducted from the Spender’s bank account.
The procedure, which is quite fast, is repeated each time a Spender wants a coin. A new random number w should be chosen by the Bank for each transaction. Similarly, each spender should choose a new 5-tuple (s, x1, x2, α1, α2) for each coin.
The Spender gives the coin to the Merchant. The following procedure is then performed:
The Merchant checks whether

g^r ≡ a h^H(A, B, z, a, b) and A^r ≡ z^H(A, B, z, a, b) b (mod p).

If this is the case, the Merchant knows that the coin is valid. However, more steps are required to prevent double spending.
The Merchant computes

d = H0(A, B, M, t),

where H0 is the hash function chosen in the initialization phase and t is a number representing the date and time of the transaction. The number t is included so that different transactions will have different values of d. The Merchant sends d to the Spender.
The Spender computes

r1 ≡ d u s + x1 and r2 ≡ d s + x2 (mod q),

where u is the Spender’s secret number, and s, x1, x2 are part of the secret random 5-tuple chosen earlier. The Spender sends r1 and r2 to the Merchant.
The Merchant checks whether

g1^r1 g2^r2 ≡ A^d B (mod p).

If this congruence holds, the Merchant accepts the coin. Otherwise, the Merchant rejects it.
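The withdrawal and spending steps can be walked through numerically. The sketch below uses tiny hypothetical parameters (real systems use primes of hundreds of digits) and SHA-256 reduced mod q standing in for the hash functions H and H0; it checks that a coin built this way passes both the coin-validity test and the challenge-response test.

```python
# Numeric walk-through of the coin equations with toy parameters.
import hashlib

p, q = 1019, 509                     # p and (p - 1)/2 both prime
g = pow(2, 2, p)                     # square of the primitive root 2 mod 1019

def Hq(*items):                      # stand-in for the hash functions H and H0
    data = repr(items).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Initialization (the exponents 13 and 29 would be discarded in practice)
g1, g2 = pow(g, 13, p), pow(g, 29, p)
x = 47                               # Bank's secret identity number
h = pow(g, x, p)                     # made public

# Spender's account
u = 101
I = pow(g1, u, p)
z_prime = pow(I * g2 % p, x, p)      # sent to the Spender by the Bank

# Withdrawal: Bank picks w; Spender picks the secret 5-tuple
w = 77
gw, beta = pow(g, w, p), pow(I * g2 % p, w, p)
s, x1, x2, a1, a2 = 3, 5, 7, 11, 17  # (s, x1, x2, alpha1, alpha2)
A = pow(I * g2 % p, s, p)
B = pow(g1, x1, p) * pow(g2, x2, p) % p
z = pow(z_prime, s, p)
a = pow(gw, a1, p) * pow(g, a2, p) % p
b = pow(beta, s * a1, p) * pow(A, a2, p) % p
c = pow(a1, -1, q) * Hq(A, B, z, a, b) % q   # sent to the Bank
c1 = (c * x + w) % q                          # Bank's reply
r = (a1 * c1 + a2) % q                        # coin (A, B, z, a, b, r) complete

# Merchant's first check on the coin
chal = Hq(A, B, z, a, b)
assert pow(g, r, p) == a * pow(h, chal, p) % p
assert pow(A, r, p) == pow(z, chal, p) * b % p

# Challenge-response at spending time (hypothetical merchant ID and timestamp)
d = Hq(A, B, 424242, 20200101)
r1 = (d * u * s + x1) % q
r2 = (d * s + x2) % q
assert pow(g1, r1, p) * pow(g2, r2, p) % p == pow(A, d, p) * B % p
```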
A few days after receiving the coin, the Merchant wants to deposit it in the Bank. The Merchant submits the coin plus the triple (r1, r2, d). The Bank performs the following:
The Bank checks that the coin has not been previously deposited. If it hasn’t been, then the next step is performed. If it has been previously deposited, the Bank skips to the Fraud Control procedures discussed in the next subsection.
The Bank checks that

g1^r1 g2^r2 ≡ A^d B (mod p).

If so, the coin is valid and the Merchant’s account is credited.
There are several possible ways for someone to try to cheat. Here is how they are dealt with.
The Spender spends the coin twice, once with the Merchant, and once with someone we’ll call the Vendor. The Merchant submits the coin along with the triple (r1, r2, d). The Vendor submits the coin along with the triple (r1′, r2′, d′). An easy calculation shows that

r1 − r1′ ≡ u s (d − d′) and r2 − r2′ ≡ s (d − d′) (mod q).

Dividing yields (r1 − r1′)/(r2 − r2′) ≡ u (mod q). The Bank computes g1^u ≡ I (mod p) and identifies the Spender. Since the Bank cannot discover u otherwise, it has proof (at least beyond a reasonable doubt) that double spending has occurred. The Spender is then sent to jail (if the jury believes that the discrete logarithm problem is hard).
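The identification step is a small modular-arithmetic computation; here it is with small hypothetical numbers, where two different challenges on the same coin let the Bank solve for the Spender’s secret.

```python
# Double-spending trace: the Bank divides the differences of the two
# response triples mod q to recover the Spender's secret identity u.
q = 509                      # toy prime; exponent arithmetic is mod q
u, s = 101, 3                # Spender's secret identity and exponent s
x1, x2 = 5, 7                # from the secret 5-tuple

def response(d):
    return ((d * u * s + x1) % q, (d * s + x2) % q)

r1, r2 = response(199)       # triple submitted by the Merchant (d = 199)
r1p, r2p = response(302)     # triple submitted by the Vendor (d' = 302)

u_recovered = (r1 - r1p) * pow(r2 - r2p, -1, q) % q
assert u_recovered == u      # the Bank then computes g1^u = I to identify her
```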
The Merchant tries submitting the coin twice, once with the legitimate triple (r1, r2, d) and once with a forged triple (r1′′, r2′′, d′′). This is essentially impossible for the Merchant to do, since it is very difficult for the Merchant to produce numbers r1′′, r2′′, d′′ such that

g1^r1′′ g2^r2′′ ≡ A^d′′ B (mod p).
Someone tries to make an unauthorized coin. This requires finding numbers A, B, z, a, b, r such that g^r ≡ a h^H(A, B, z, a, b) and A^r ≡ z^H(A, B, z, a, b) b (mod p). This is probably hard to do. For example, starting with A, B, z, a, b, then trying to find r, requires solving a discrete logarithm problem just to make the first equation work. Note that the Spender is foiled in attempts to produce a second coin using a new 5-tuple, since the values of x and w are known only to the Bank. Therefore, finding the correct value of c1 is very difficult.
Eve L. Dewar, an evil merchant, receives a coin from the Spender and deposits it in the bank, but also tries to spend the coin with the Merchant. Eve gives the coin to the Merchant, who computes d′ = H0(A, B, M, t′), which very likely is not equal to the d from Eve’s transaction with the Spender. Eve does not know u, s, x1, x2, but she must choose r1′ and r2′ such that g1^r1′ g2^r2′ ≡ A^d′ B (mod p). This again is a type of discrete logarithm problem. Why can’t Eve simply use the r1, r2 that she already knows? Since d′ ≠ d, the Merchant would find that g1^r1 g2^r2 ≢ A^d′ B (mod p).
Someone working in the Bank tries to forge a coin. This person has essentially the same information as Eve, plus the identification number I. It is possible to make a coin that satisfies the two verification congruences. However, since the Spender has kept u secret, the person in the bank will not be able to produce a suitable r1. Of course, if s ≡ 0 were allowed, this would be possible; this is one reason A ≡ 1 is not allowed.
Someone steals the coin from the Spender and tries to spend it. The first verification equation is still satisfied, but the thief does not know u, s, x1, x2 and therefore will not be able to produce r1, r2 such that g1^r1 g2^r2 ≡ A^d B (mod p).
Eve L. Dewar, the evil merchant, steals the coin and the triple (r1, r2, d) from the Merchant before they are submitted to the Bank. Unless the bank requires merchants to keep records of the time and date of each transaction, and therefore be able to reproduce the inputs that produced d, Eve’s theft will be successful. This, of course, is a flaw of ordinary cash, too.
During the entire transaction with the Merchant, the Spender never needs to provide any identification. This is the same as for purchases made with conventional cash. Also, note that the Bank never sees the values of A, B, z, a, b, r for the coin until it is deposited by the Merchant. In fact, the Bank provides only the numbers g_w and β and the number c1, and has seen only c. However, the coin still contains information that identifies the Spender in the case of double spending. Is it possible for the Merchant or the Bank to extract the Spender’s identity from knowledge of the coin and the triple (r1, r2, d)? Since the Bank also knows the identification number I, it suffices to consider the case where the Bank is trying to identify the Spender. Since s, x1, x2, α1, α2 are secret random numbers known only to the Spender, A and B are random numbers. In particular, A is a random power of I g2 and cannot be used to deduce I. The number z is simply A^x, and so does not provide any help beyond what is known from A. Since a and b introduce two new secret random exponents α1, α2, they are again random numbers from the viewpoint of everyone except the Spender.
At this point, there are five numbers, A, B, z, a, b, that look like random powers of g to everyone except the Spender. However, when c is sent to the Bank, the Bank might try to compute the value of H(A, B, z, a, b) and thus deduce α1. But the Bank has not yet seen the coin and so cannot compute this hash value. The Bank could try to keep a list of all values of c it has received, along with values of H(A, B, z, a, b) for every coin that is deposited, and then try all combinations to find matches. But it is easily seen that, in a system with millions of coins, the number of possible combinations is too large for this to be practical. Therefore, it is unlikely that knowledge of c, hence of α1, will help the Bank identify the Spender.
The numbers c and c1 provide what Brands calls a restricted blind signature for the coin. Namely, using the coin once does not allow identification of the signer (namely, the Spender), but using it twice does (and the Spender is sent to jail, as pointed out previously).
To see the effect of the restricted blind signature, suppose the blinding is essentially removed from the process by taking α1 = 1 and α2 = 0, so that c ≡ H(A, B, z, a, b) (mod q). Then the Bank could keep a list of the values of c, along with the person corresponding to each c. When a coin is deposited, the value of H(A, B, z, a, b) would then be computed and compared with the list. Probably there would be only one person for a given c, so the Bank could identify the Spender.
In this section we provide a brief overview of Bitcoin. For those interested in the broader issues behind the design of cryptocurrencies like Bitcoin, we refer to the next section.
Bitcoin is an example of a ledger-based cryptocurrency that uses a combination of cryptography and decentralized consensus to keep track of all of the transactions related to the creation and exchange of virtual coins, known as bitcoins.
Bitcoin is a very sophisticated collection of cryptography and communication protocols, but the basic structure behind the operation of Bitcoin can be summarized as having five main stages:
Users maintain a transaction ledger;
Users make transactions and announce their transactions;
Users gather transactions into blocks;
Users solve cryptographic puzzles using these blocks;
Users distribute their solved puzzle block.
Let’s start in the middle. There are many users. Transactions (for example, Alice gives three coins to Bob and five coins to Carla) are happening everywhere. Each transaction is broadcast to the network. Each user collects these transactions and verifies that they are legitimate. Each user collects the valid transactions into a block. Suddenly, one user, say Zeno, gets lucky (see “Mining” below). That user broadcasts this news to the network and gets to add his block to the ledger that records all transactions that have ever taken place. The transactions continue throughout the world and continue to be broadcast to everyone. The users add the valid transactions to their blocks, possibly including earlier transactions that were not included in the block that just got added to the ledger. After approximately 10 minutes, another user, let’s say Xenia, gets lucky and is allowed to add her block to the ledger. If Xenia believes that all of the transactions are valid in the block that Zeno added, then Xenia adds her block to the ledger that includes Zeno’s block. If not, then Xenia adds her block to the ledger that was in place before Zeno’s block was added. In either case, Xenia broadcasts what she did.
Eventually, after approximately another 10 minutes, Wesley gets lucky and gets to add his block to the ledger. But what if there are two or more competing branches of the ledger? If Wesley believes that one contains invalid transactions, he does not add to it, and instead chooses among the remaining branches. But if everything is valid, then Wesley chooses the longest branch. In this way, the network builds a consensus as to the validity of transactions. The longer branch has had more randomly chosen users certify the transactions it contains.
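The branch-selection rule that Wesley follows can be sketched in a few lines of Python. This is a simplified model, not Bitcoin's actual implementation: branches are plain lists of blocks, and the function `is_valid_block` stands in for the full transaction-verification logic.

```python
def choose_branch(branches, is_valid_block):
    """Pick the branch a user like Wesley would extend.

    Branches containing any invalid block are discarded;
    among the remaining branches, the longest one wins.
    """
    valid = [b for b in branches if all(is_valid_block(blk) for blk in b)]
    return max(valid, key=len) if valid else None
```

For example, if a longer branch contains a block with an invalid transaction, the shorter all-valid branch is chosen instead, which is exactly the behavior described above.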
Stopping Double Spending. Now, suppose Eve buys something from the vendor Venus and uses the same coins to buy something from the seller Selena. Eve broadcasts two transactions, one saying that she paid the coins to Venus and one saying that she paid the coins to Selena. Some users might add one of the transactions to their blocks, and some add the other transaction to their blocks. There is possibly no way someone can tell which is legitimate. But eventually, say, Venus’s block ends up in a branch of blocks that is longer than the branch containing Selena’s block. Since this branch is longer, it keeps being augmented by new blocks, and the payment to Selena becomes worthless (the other transactions in the block could be included in later additions to the longer branch). What happens to Selena? Has she been cheated? No. After concluding the deal with Eve, Selena waits an hour before delivering the product. By that time, either her payment has been included in the longer branch, or she realizes that Eve’s payment to her is worthless, so Selena does not deliver the product.
Incentives. Whenever a user gets lucky and is chosen to add a block to the ledger, that user collects fees from each transaction that is included in the block. These payments of fees are listed as transactions that form part of the block that is added to the ledger. If the user includes invalid transactions, then it is likely that a new branch will soon be started that does not include this block, and thereby the payments of transaction fees become worthless. So there is an incentive to include many transactions, but there is also an incentive to verify their validity. At present, there is also a reward for being the lucky user, and this is included as a payment in the user’s block that is being added to the ledger. After every 210,000 blocks are added to the ledger, which takes around four years at 10 minutes per block, the reward amount is halved. In 2018, the reward stood at 12.5 bitcoins. The overall system is set up so that there will eventually be a total of 21 million bitcoins in the system. After that, the plan is to maintain these 21 million coins as the only bitcoins, with no more being produced. At that point, the transaction fees are expected to provide enough incentive to keep the system running.
Mining. How is the lucky user chosen? Each user computes

H(Nonce, prevhash, trans_1, trans_2, ..., trans_n)

for billions of values of Nonce. Here, H is the hash function SHA-256, Nonce is a random bitstring to be found, prevhash is the hash of the previous block in the blockchain, and trans_1, trans_2, ..., trans_n are the transactions that the user is proposing to add to the ledger. On the average, after around 2^66 hashes are computed worldwide, some user obtains a hash value whose first 66 binary digits are 0s. (These numbers are adjusted from time to time as more users join in order to keep the average spacing at 10 minutes.) This user is the “lucky” one. The nonce that produced the desired hash is broadcast, along with the hash value obtained and the block that is being added to the ledger. The mining then resumes with the updated ledger, at least by users who deem the new block to be valid.
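A toy version of this search can be written with Python’s hashlib. Real mining requires roughly 2^66 hashes for 66 leading zero bits, which is far out of reach for a single machine; the sketch below uses a tiny difficulty so it finishes quickly, and the way it concatenates the nonce, previous hash, and transactions is our own simplification, not Bitcoin’s actual block format.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of 0 bits at the start of a hash value."""
    count = 0
    for byte in digest:
        if byte == 0:
            count += 8
        else:
            count += 8 - byte.bit_length()  # zeros at the top of this byte
            break
    return count

def mine(prevhash, transactions, difficulty):
    """Try nonces 0, 1, 2, ... until the SHA-256 hash of the block data
    begins with at least `difficulty` zero bits; return the winning nonce."""
    nonce = 0
    while True:
        data = f"{nonce}|{prevhash}|{'|'.join(transactions)}".encode()
        if leading_zero_bits(hashlib.sha256(data).digest()) >= difficulty:
            return nonce
        nonce += 1
```

With difficulty 12, the loop needs about 2^12 = 4096 attempts on average; each additional required zero bit doubles the expected work, which is how the network tunes the 10-minute spacing.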
Mining uses enormous amounts of electricity and can be done profitably only when inexpensive electric power is available. When this happens, massive banks of computers are used to compute hash values. The rate of success is directly proportional to the percentage of computer power one user has in relation to the total power of all the users in the world. As long as one party does not have access to a large fraction of the total computing power, the choice of the lucky user will tend to be random enough to prevent cheating by a powerful user.
Users Maintain a Transaction Ledger: The basic structure behind Bitcoin is similar to that of many other ledger-based cryptocurrencies. No actual digital coins are exchanged. Rather, a ledger is used to keep track of transactions that take place, and pieces of the ledger are the digital objects that are shared. Each user maintains their own copy of the ledger, which they use to record the community’s collection of transactions. The ledger consists of blocks, structured as a blockchain (see Section 12.7), which are cryptographically signed and which reference the previous block in the ledger using hash pointers.
Also, a transaction includes another hash pointer, one to the transaction that says that the spender has the bitcoins that are being spent. This means that when someone else checks the validity of a transaction, it is necessary to look at only the transactions that occurred since that earlier transaction. For example, if George posts a transaction on June 14 where he gives 13 bitcoins to Betsy, George includes a pointer to the transaction on the previous July 4 where Tom paid 18 bitcoins to George. When Alex wants to check the validity of this transaction, he checks only those transactions from July 4 until June 14 to be sure that George didn’t also give the bitcoins to Aaron. Moreover, George can also post a transaction on June 14 that gives the other five bitcoins from July 4 to himself. In that way, the ledger is updated so that there aren’t small pieces of long-ago transactions lying around.
Making and Announcing Transactions: A transaction specifies the coins from a previous transaction that are being consumed as input. As output, it specifies the addresses of the recipients and the amount of coins to be delivered to each. For example, in Version 1 of Bitcoin, each user has a public/private key pair for the Elliptic Curve Digital Signature Algorithm, and the user’s address is a 160-bit cryptographic hash of their public key. The 160-bit hash is determined by first calculating the SHA-256 hash of the user’s public key, and then calculating the RIPEMD-160 hash (another widely used hash function) of the SHA-256 output. A four-byte cryptographic checksum, calculated by applying SHA-256 twice to the 160-bit hash, is appended. Finally, this is encoded into an alphanumeric representation.
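The last two steps, the checksum and the alphanumeric encoding, can be sketched with Python’s hashlib. The Base58 alphabet below is the one Bitcoin uses (it omits the easily confused characters 0, O, I, and l); the 20-byte input here is a placeholder rather than the hash of a real public key, and the RIPEMD-160 step is omitted because its availability in hashlib depends on the local OpenSSL build.

```python
import hashlib

# Bitcoin's Base58 alphabet: no 0, O, I, or l
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Encode bytes as Base58, preserving leading zero bytes as '1's."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, r = divmod(n, 58)
        out = BASE58_ALPHABET[r] + out
    for byte in data:          # each leading zero byte becomes the digit '1'
        if byte != 0:
            break
        out = "1" + out
    return out

def address_from_hash160(hash160: bytes) -> str:
    """Version byte + 160-bit hash + 4-byte double-SHA-256 checksum, Base58-encoded."""
    versioned = b"\x00" + hash160   # 0x00 is the main-network version byte
    check = hashlib.sha256(hashlib.sha256(versioned).digest()).digest()[:4]
    return base58_encode(versioned + check)
```

The checksum lets software detect a mistyped address before any coins are sent to it.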
The transaction is signed by the originator of the transaction using their private key, and finally it is announced to the entire set of users so it can be added to blocks that will be appended to the community’s ledger.
Gathering Transactions into Blocks: Transactions are received by users. Users verify the transactions that they receive and discard any that they are not able to verify. There are several reasons why transactions might not verify. For example, since the communications are taking place on a network, it is possible that not every user will receive the same set of transactions at the same time. Or, users might be malicious and attempt to announce false transactions, and thus Bitcoin relies on users to examine the collection of new transactions and previous transactions to ensure that no malicious behavior is taking place (such as double spending, or attempting to steal coins). Users then gather the transactions they believe are valid into the block that they are forming. A new, candidate block consists of a collection of transactions that a user believes is valid, and these transactions are arranged in a Merkle Tree to allow for efficient searching of transactions within a block. The block also contains a hash pointer to a previous block in the ledger. The hash pointer is calculated using the SHA-256 hash of a previous block.
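The Merkle-tree arrangement of a block’s transactions can be sketched as follows. This is a minimal version: transactions are plain strings, the leaves are their SHA-256 hashes, and, as Bitcoin does, an unpaired hash at the end of a level is duplicated before hashing pairs together.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Hash the transactions pairwise, level by level, down to a single root."""
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])        # duplicate the last hash on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single transaction changes the root, so the root in the block header commits to the whole transaction list, while membership of one transaction can be checked with only a logarithmic number of hashes.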
Anonymity: Any cash system should have some form of anonymity. In Bitcoin, a user is identified only through that user’s public key, not through the user’s actual identity. A single user could register under multiple names so that an observer will not see several transactions being made by one user and deduce the user’s identity. Of course, some anonymity is lost because this user will probably need to make transfers from the account of one of his names to the account of another of his names, so a long-term analysis might reveal information.
An interesting feature of Bitcoin is that registering under multiple names does not give a user more power in the mining operations because the computational resources for mining will be spread over several accounts but will not increase in power. Therefore, the full set of this user’s names has the same probability of being lucky as if the user had opened only a single account. This is much better than the alternative where a consensus could be reached by voting with one vote per account.
In this section, we give a general discussion of issues related to digital cash, using Bitcoin as an example.
The Brands digital cash system from Section 16.2 gives insight into how difficult it can be to re-create the benefits of cash using just cryptography and, even with all its cryptographic sophistication, the Brands scheme is still not able to allow the cash to be transferred to others or the digital money to be divided into smaller amounts. Up until the 2000s, almost all digital cash protocols faced many shortcomings when it came to creating digital objects that acted like real-world money. For example, in order to allow for anonymity and to prevent double spending, it was necessary to introduce significant complexity into a scheme. Some protocols were able to work only online (i.e., they required the ability to connect to an agent acting as a bank to verify the validity of the digital currency), while other protocols, like Brands’s system, did not allow users to break cash into smaller denominations and thereby issue change in transactions. Beyond the technical challenges, some systems, like Chaum’s ECash, faced pragmatic hurdles in persuading banks and merchants to adopt them. Of course, without merchants to buy things from, there won’t be users to make purchases, and this can be the death of a cryptocurrency.
One of the major practical challenges that the early forms of cryptocurrencies faced was the valuation of their digital cash. For example, just because a digital coin is supposedly worth $100, what actually makes it worth that amount? Many early digital currencies attempted to tie themselves to real-world currencies or to real-world commodities (like gold). While this might seem like a good idea, it also introduced several practical challenges: it implicitly meant that the value of the entire cryptocurrency had to be backed by an equivalent amount of real-world cash or coins. And this was a problem: if these new, digital currencies wanted to exist and be used, then they had to acquire a lot of real-world money to back them.
This also meant that the new forms of digital cash weren’t actually their own currency, but actually just another way to spend an existing currency or commodity. Since needing real-world assets in order to get a cryptocurrency off the ground was a hurdle, it led many to ask “What if one could make the digital cash its own currency that was somehow valued independently from other currencies?”
It turns out that many of the technical solutions needed to overcome the various hurdles that we have outlined already existed in the early 2000s; what was needed was a different perspective. In 2008, Satoshi Nakamoto presented a landmark paper [Nakamoto], which launched the next generation of cryptocurrencies and introduced the well-known cryptocurrency Bitcoin. These cryptocurrencies, which include Bitcoin, Ethereum, and many others, have been able to overcome many of the hurdles that stifled previous attempts such as ECash. Currently, currencies like Bitcoin and Ethereum are traded internationally as currencies in their own right.
The new generation of cryptocurrencies were successful because they made practical compromises or, to put it another way, they did not try to force all of the properties of real-world cash onto digital currencies. For example, a digital currency like Bitcoin does not work offline, but it does cleverly use decentralization to provide robustness so that if parts of the system are offline, the rest of the system can continue to function. Another major paradigm shift was the realization that one did not need to create a digital object corresponding to a coin or to cash, but rather that the money could be virtual and one only needed to keep track of the trading and exchange of this virtual money. That is, bookkeeping was what was important, not the coin itself! And, once we stop trying to create a digital object that acts like cash, then it becomes a lot easier to achieve properties like preventing double spending and being able to divide coins into smaller parts. Another paradigm shift was the realization that, rather than tying the value of the currency to real-world currency, one could tie the value to solving a hard (but not impossible) computational problem and make the solution of that computational problem have value in the new currency.
In order to understand how these new cryptocurrencies work, let’s take a look at the basic structures being used in Bitcoin. Many of the other currencies use similar building blocks for their design.
As we just mentioned, one of the major changes between Bitcoin and older digital cash schemes is a move away from a digital object representing a coin and instead to the use of an imaginary or virtual currency that is kept track of in bookkeeping records. These ledgers, as they are known, record the creation and exchange of these virtual coins. The use of cryptography to form secure ledgers has become a growing trend that has impacted many applications beyond the support of cryptocurrencies.
Let us look at a simple ledger. To start, assume that we have a central bank, which we call BigBank. Later we shall remove BigBank, but for now it is useful to have BigBank around in order to start the discussion. We suppose that BigBank can make new virtual coins, and that it wants to share them with Alice. To do so, it makes a ledger that records these events, something that looks like:

Ledger 1:
  BigBank creates 100 coins.
  BigBank transfers 100 coins to Alice.
This ledger can be shared with Alice or anyone else, and after looking at it, one can easily figure out the sequence of events that happened. BigBank made 100 coins, and then these coins were given by BigBank to Alice. Assuming there have not been any other transactions, then one can conclude that Alice is now the owner of 100 virtual coins.
Of course, Alice should not just trust this ledger. Anyone could have made this ledger, pretending to be BigBank. As it is, there is nothing tying this ledger to its creator. To solve this problem, we must use digital signatures. Therefore, BigBank has a public–private key pair and has shared its public key with Alice and the rest of the world. There are many ways in which this could have been done, but it is simplest to think of some certificate authority issuing a certificate containing BigBank’s public key credentials. Now, rather than share only the Ledger, BigBank also shares its signature on the Ledger.
This makes it harder for someone to pretend to be BigBank, but we still have some of the usual problems that we encountered in Chapter 15. For example, one problem that stands out is that it is possible to perform a replay attack. If Alice receives five separate copies of the signed Ledger, then she might conclude that she has 500 coins. Therefore, in order to fix this we need to include some form of unique transaction identifier or counter in the Ledger before BigBank signs it, for example a serial number on each signed update.
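The replay protection provided by a serial number can be sketched as follows. Note the stand-in: an HMAC is a keyed message authentication code, not a true public-key digital signature, and the key below is a made-up placeholder; it plays the role of BigBank’s signature only so the example is self-contained.

```python
import hashlib
import hmac

KEY = b"bigbank-private-key"   # hypothetical secret; a real system uses a signing key

def sign(message: bytes) -> bytes:
    """Stand-in for BigBank's digital signature."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

class LedgerReader:
    """Accepts each signed ledger update at most once, in increasing serial order."""
    def __init__(self):
        self.last_serial = 0

    def accept(self, serial, entry, tag):
        message = f"{serial}:{entry}".encode()
        if not hmac.compare_digest(tag, sign(message)):
            return False            # forged or corrupted update
        if serial <= self.last_serial:
            return False            # replayed (or stale) update
        self.last_serial = serial
        return True
```

Because the serial number is covered by the signature, resending the same signed update is detected, so five copies of one update cannot be mistaken for 500 coins.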
Now, the signed Ledger allows anyone to have some trust that BigBank has given Alice 100 coins.
Alice’s coins aren’t much use to her unless she can spend them. Suppose she goes to Sarah’s Store, which is known to accept BigBank’s coins, and wants to buy an item using BigBank’s coins. Alice could write another entry in the ledger by giving Sarah’s Store an update to the ledger, call it Ledger 2, recording that Alice transfers 100 coins to Sarah’s Store.
Now, Alice signs this update and sends Sarah’s Store the update together with her signature on it. Sarah’s Store can indeed verify that Alice made this update, but a problem arises: How does Sarah’s Store know that Alice in fact had 100 coins to spend?
A simple way to take care of this is for Alice to attach a signed copy of BigBank’s original Ledger. Now Sarah’s Store can verify that Alice did get 100 coins from BigBank.
This works, but we come to the classic double spending problem that all digital currencies must address. Right now, there is nothing preventing Alice from going to Vendor Veronica and spending those 100 coins again. Alice can simply make another version of Ledger 2, call it Ledger 2′, recording that Alice transfers the same 100 coins to Vendor Veronica.
She can sign Ledger 2′ and send both it and BigBank’s signed copy of the original Ledger 1. But even with these, there is no way for Vendor Veronica to be assured that Alice did not spend her 100 coins elsewhere, which she in fact did.
What is needed is a way to solve the double spending problem. We can do this by slightly revising the purchasing protocol to place BigBank in a more central role. Rather than let Alice announce her transaction, we require Alice to contact BigBank when she makes purchases and BigBank to announce updates to the ledger. The ledger of transactions operates as an append-only ledger in that the original ledger with all of its updates contains the history of all transactions that have ever occurred using BigBank’s coins. Since it is not possible to remove transactions from the ledger, it is possible for everyone to see if double spending occurs simply by keeping track of the ledger announced by BigBank.
Ensuring that this append-only ledger is cryptographically protected requires that each update is somehow tied to previous versions of the ledger updates; otherwise it might be possible for transactions to be altered. The append-only ledger is built using a cryptographic data structure known as a blockchain, which we introduced in Section 12.7. A blockchain consists of two different forms of hash-based data structures. First, it uses hash pointers in a hash chain, and second it uses a Merkle tree to store transactions. The use of the hash chain in the blockchain makes the ledger tamper-proof. If an adversary attempts to tamper with data that is in block k, then the hash contained in block k + 1, namely, the hash of the correct block k, will not match the hash of the altered block k. The use of the Merkle tree provides an efficient means of determining whether a transaction belongs to a block.
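The tamper-evidence of the hash chain can be demonstrated directly. This is a toy model: each block stores the SHA-256 hash of its predecessor, a final hash pointer certifies the whole chain, and block contents are plain strings rather than Merkle trees of transactions.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with its pointer to the previous block."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    """Build a hash chain; return the blocks and the final hash pointer."""
    chain = []
    prev = "0" * 64                       # conventional all-zero genesis pointer
    for data in entries:
        chain.append({"prev": prev, "data": data})
        prev = block_hash(prev, data)
    return chain, prev

def verify_chain(chain, final_hash):
    """Recompute every hash pointer; any altered block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(prev, block["data"])
    return prev == final_hash
```

Altering the data in block k changes its hash, so the pointer stored in block k + 1 (and ultimately the final hash pointer) no longer matches, exactly as described above.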
Now suppose that BigBank wants to keep track of all of the transactions that have taken place and publish it so that others can know these transactions. BigBank could periodically publish blocks, or it could gather an entire collection of blocks and publish them all at once. Ultimately, BigBank publishes all of the blocks of the blockchain along with a final hash pointer that certifies the entire collection of blocks in the blockchain. Each of the blocks in the blockchain, along with the final hash pointer, is signed by BigBank so that others know the identity of the entity publishing the ledger. Thus, the hash chaining in the blockchain provides data integrity, while the digital signatures applied to each block provide origin authentication.
Let us return to the story of BigBank and all of the various participants who might want to use or acquire some of BigBank’s digital coins. Everyone wishing to use some of their BigBank coins must tell BigBank how many coins or how much value is being transferred for a purchase and whom the coins will be given to. BigBank does not need to know what is being purchased, or why there is a transaction occurring; it just needs to know the value of the exchange and the recipient. BigBank will then gather several of these individual transactions into a single block, and publish that block as an update to the blockchain that represents BigBank’s ledger.
To do this, though, BigBank needs a way to describe the transactions in the ledger, and this means that there needs to be a set of basic transaction types that are allowed. It turns out that we don’t need very many to have a useful system. For example, it is useful for an entity like BigBank to create coins, where each coin has a value, a serial number, and a recipient. BigBank can, in fact, create as many coins as it wants during the time before it publishes the next block in the blockchain. For example, suppose BigBank creates three coins with different recipients, and that it publishes the creation of these coins in the data portion of a block, listing for each numbered transaction the new coin’s value and the public key of its recipient.
When this block is published, it informs the rest of the world that BigBank created several coins of different values. First, in transaction 0 for block 76, a coin worth 5 units was created and given to PK_BB (which, as will be explained shortly, stands for BigBank’s Public Key). This coin will be referred to as coin 76(0) since it was made in block 76, transaction 0. Similarly, a coin was created in transaction 1 of Block 76 for PK_A, which will be referred to as coin 76(1). Lastly, a coin was created in transaction 2 for PK_B, and will be referred to as coin 76(2). Who or what are PK_BB, PK_A, and PK_B? This is an example of something clever that Bitcoin borrowed from other applications of cryptography. In [HIP], it was recognized that the notion of one’s identity is best tied to something that only that entity should possess. This is precisely what a private key provides in public key cryptography, and so if someone wants to send something to Alice or Bob, then they can use Alice or Bob’s public key (which they know because the public key is public) as the name of the recipient.
In order to support purchases made by Alice and others, we need to allow for the users of BigBank currency to consume coins in exchange for goods, and to receive change for their transactions. It was realized that it was easier to simply destroy the original coin and issue new coins as payment to the vendors and new coins as change to those making purchases than it was to break a coin down into smaller denominations, which had proven difficult to accomplish for previous digital coin schemes. So, if Bob has a coin 76(2) worth 10 units, and he wants to buy something worth 7 units, BigBank destroys 76(2), issues a new coin to the vendor worth 7 units, and issues a new coin to Bob worth 3 units.
Of course, BigBank is big and might need to handle many transactions during the time it takes to form a block. Therefore, BigBank needs to hear from all of its customers about their purchases, and it needs to make certain that the coins that its customers are spending are valid and not previously spent. It also needs to create new coins and give the new coins to those who are selling to BigBank’s customers, while also issuing new coins as change. Let us look at how this could work. Suppose Alice has a coin 41(2) that is worth 20 units, and that she wants to buy a widget from Sarah’s Store for 15 units. Then, the first step that BigBank takes is to destroy Alice’s coin worth 20 units, then issue a new coin to Sarah’s Store worth 15 units and a new coin to Alice worth 5 units, which corresponds to Alice’s change from her purchase. Thus, one coin was consumed and two were created, but the total value of the coins being consumed is equal to the total value of the coins being created from the transaction. Lastly, before BigBank can publish its update to the ledger, it needs to get all of the owners of the coins that were consumed to sign the transaction, thereby ensuring that they knowingly spent their coins. Any good cryptographic signature scheme would work. Bitcoin uses the Elliptic Curve Digital Signature Algorithm (ECDSA), which we will see in Exercise 24 in Chapter 21.
The next block of transactions in the blockchain thus records one consumed coin, 41(2), and two created coins, a 15-unit coin for Sarah’s Store and a 5-unit coin for Alice, together with Alice’s signature authorizing the transaction.
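The bookkeeping rule just described, consume old coins and create new coins of equal total value, can be sketched as follows. Signature checking is omitted, and the `unspent` dictionary together with the `block(index)` coin-naming convention is our simplification of BigBank’s ledger.

```python
def valid_transaction(tx, unspent):
    """tx = {"consume": [(coin_id, value), ...],
             "create": [(recipient, value), ...]}"""
    # every consumed coin must exist, be unspent, and have the claimed value
    for coin_id, value in tx["consume"]:
        if unspent.get(coin_id) != value:
            return False
    # value is conserved: total destroyed equals total created
    return (sum(v for _, v in tx["consume"])
            == sum(v for _, v in tx["create"]))

def apply_transaction(tx, unspent, block_no):
    """Destroy the consumed coins and record the newly created ones."""
    for coin_id, _ in tx["consume"]:
        del unspent[coin_id]
    for i, (_recipient, value) in enumerate(tx["create"]):
        unspent[f"{block_no}({i})"] = value   # coin named by block and index
```

Running Alice’s purchase through this check consumes coin 41(2) and creates the two new coins; attempting to spend 41(2) a second time then fails, because the coin is no longer in the unspent set.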
BigBank’s approach to managing its ledger of transactions seems to work well, but it has one major problem: BigBank has to be involved in every transaction. Not only does this mean a lot of work for BigBank, but it also leads to several security concerns, such as the risk of a denial of service to the system if BigBank becomes disconnected from the rest of the network, or the risk that BigBank might use its position as the central entity in the economy to extort its members. Although BigBank cannot create false transactions, it could demand large service fees in order to carry out a transaction.
Bitcoin gets around the problem of a central bank vetting each and every transaction by making the blockchain protocol decentralized. In other words, rather than having a single entity check the transactions, Bitcoin uses the notion of community consensus to check that the transactions are correct. Also, rather than have a single entity maintain and update the ledger of transactions, Bitcoin allows the members of the community to form the next block in the blockchain and thereby update the ledger themselves. In fact, Bitcoin went further in that it did not trust banks to mint coins themselves. Instead, Bitcoin devised a means by which coins get created periodically and are awarded to those who do work to keep the Bitcoin system moving smoothly.
In order to understand this, we need to set the stage for how life in the Bitcoin universe works. Every T units of time (in Bitcoin, T is approximately 10 minutes), the users in the Bitcoin network must agree on which transactions were broadcast by the users making purchases and trading in bitcoins. Also, the nodes must agree on the order of the transactions, which is particularly important when money is being exchanged for payments. Bitcoin then proceeds in rounds, with a new update to the ledger being published each round.
Every user in the Bitcoin network keeps track of a ledger that contains all of the blocks that they’ve agreed upon. Every user also keeps a list of transactions that they have heard about but that have not yet been added to the blockchain. In particular, each user might have a slightly different version of this list, and this might happen for a variety of reasons. For example, one user might have heard about a transaction because it is located near the user who announced it, while another user might be farther away and the announcement of the transaction might not have made it across the network to that user yet. Or, there might be malicious users that have announced incorrect transactions and are trying to put wrong transactions into the blockchain.
Bitcoin uses a consensus protocol to work out which transactions are most likely correct and which ones are invalid. A consensus protocol is essentially a way to tally up how many users have seen and believe in a transaction. In each round of the consensus protocol, new transactions are broadcast to all of the users in the Bitcoin network. These transactions must be signed by those spending the coins involved. Each user collects new transactions and puts the transactions it believes are valid into a new block. In each round, a random user will get to announce its block to all of the other users. Bitcoin uses a clever way to determine which user will be this random user, as we shall soon discuss. All of the users that get this new block then check it over and, if they accept it, they include its hash in the next block that they create. Bitcoin uses the SHA-256 hash function for hashing the block and forming the hash pointer.
Before we proceed to discuss how Bitcoin randomly chooses which user will be chosen to announce its block, we look at some security concerns with this protocol. First is the concern that Alice could attempt to double spend a coin. In each round, coins are being created and destroyed, and this is being logged into the ledger. Since the ledger is being announced to the broader community, other users know which coins have been spent previously and thus they will not accept a transaction where Alice tries to spend a coin she already used. Much of the trustworthiness of Bitcoin’s system is derived from the fact that a consensus protocol will arrive at what most users believe is correct and, as long as most users are honest, things will work as they should with false transactions being filtered before they get into the blockchain.
Another concern is that Alice could write transactions to give herself coins from someone else. This is not possible because, in order to do so, Alice would have to write a transaction as if she were another user, and this would require her to create a valid signature for that user, which she is unable to do.
A different concern is whether Alice could prevent a transaction from Bob from being added to the ledger. Although she could choose not to include Bob’s transaction in a block she is creating, there is a good chance that one of the other users in the network will announce a block with Bob’s transaction because the user that is chosen to announce its block is randomly chosen. Additionally, even if she is chosen to announce her block, a different honest user will likely be chosen during the next round and will simply include Bob’s transaction. In Bitcoin, there isn’t any problem with a transaction taking a few rounds to get into the blockchain.
We now return to the question of how nodes are randomly selected. Remember that we needed to remove the role of a central entity like BigBank, and this means that we cannot rely on a central entity to determine which user will be randomly selected. Bitcoin needed a way to choose a user randomly without anyone controlling who that user would be. At the same time, we can’t just allow users to decide selfishly that they are the one chosen to announce the next block – that could lead to situations where a malicious user purposely leaves another user’s transactions out of the blockchain.
Bitcoin also needed a way to encourage users to want to participate by being the random node that announces a new block. The design of Bitcoin allows users who create blocks to be given a reward for making a block. This reward is known as a block reward, and is a fixed amount defined by the Bitcoin system. Additionally, when a user engages in a transaction that it wants logged into the blockchain ledger, it can set aside a small amount of value from the transaction (taking a little bit from the change it might issue itself, for example), and give this amount as a service charge to the user that creates and adds the block containing the transaction into the blockchain.
Together, these two incentives encourage users to want to make the block, so the challenge then is ensuring that random users are selected. Bitcoin achieves this by requiring that users perform a task where it isn't guaranteed which user will be the first to complete the task. The first user to complete the task will have performed the proof of work needed to be randomly selected. Users compete with each other using their computing power, and the likelihood that a user will be selected "randomly" depends on how much computing power they have. This means that if Alice has twice the computing power that Bob has, then she will be twice as likely to be the first to complete the computing task. But, even with this advantage, Alice does not control whether she will finish the task first, and so Bob and other users have a chance to publish the block before she does.
Bitcoin's proof of work requires that users solve a hash puzzle in order to be able to announce a block to be added to the blockchain. Specifically, a user who wishes to propose a block for addition to the community's ledger is required to find a nonce $x$ such that
$$H(\mathrm{prev} \,\|\, \mathrm{trans} \,\|\, x) < Y,$$
where $x$ is the nonce to be found, prev is the hash of the previous block in the blockchain, trans is the list of transactions that the user is proposing to add to the ledger, and $Y$ is a threshold value for the hash puzzle. What this means is that the hash value is interpreted as an integer in binary, and there must be a certain number of 0s at the beginning of this binary expansion. The value of $Y$ is chosen so that it will take roughly $T$ units of time for some user to solve the puzzle. In Bitcoin, $T$ is approximately 10 minutes, and every two weeks the Bitcoin protocol adjusts $Y$ so that the average time needed to solve a hash puzzle remains around $T$. Bitcoin uses the SHA-256 hashing algorithm twice for the proof of work cryptopuzzle, once for the hash of the previous block and once for obtaining the desired small hash value. In practice, the user does a massive computation using numerous nonces, hoping to be the first to find an appropriate hash value.
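The hash-puzzle search can be sketched in a few lines of Python. This is only a toy: the difficulty is set to 16 leading zero bits so the loop finishes quickly, the "previous hash" and transaction data are placeholder bytes, and real Bitcoin hashes a fully serialized block header rather than a simple concatenation.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin applies SHA-256 twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def solve_puzzle(prev_hash: bytes, transactions: bytes, leading_zero_bits: int) -> int:
    """Try nonces 0, 1, 2, ... until the double-SHA-256 hash, read as a
    256-bit integer, falls below the threshold 2**(256 - leading_zero_bits)."""
    threshold = 1 << (256 - leading_zero_bits)
    nonce = 0
    while True:
        digest = double_sha256(prev_hash + transactions + nonce.to_bytes(8, "big"))
        if int.from_bytes(digest, "big") < threshold:
            return nonce
        nonce += 1

prev_hash = b"\x00" * 32          # placeholder for the previous block's hash
txs = b"Alice pays Bob 1 coin"    # placeholder transaction data
nonce = solve_puzzle(prev_hash, txs, leading_zero_bits=16)  # tiny difficulty for the demo
```

On average this takes about $2^{16}$ hashes at the toy difficulty; Bitcoin's real difficulty requires vastly more work, which is what makes being "first" effectively random.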
Once a user finds such a nonce, it forms the new block, which includes the nonce $x$, the hash of the previous block, and the transactions it has included. Then the user announces the block to the entire community. The reward for completing this task first is that the user will receive the block reward, which is several bitcoins, as well as the transaction fees. The process of solving the hash puzzle and announcing a block is called Bitcoin mining, and it has become a popular activity, with many companies devoting large amounts of computing (and electrical power) to solving puzzles and reaping the Bitcoin rewards.
Let's now put everything together and describe what a typical Bitcoin blockchain looks like. A block in Bitcoin's blockchain consists of a header followed by a collection of transactions that have been organized as a Merkle tree. The header itself contains several different pieces of information: the hash pointer to the previous block in the blockchain, a timestamp, the root of the Merkle tree, and the nonce. In the actual implementation of Bitcoin, it is only the hash of the header that must satisfy the conditions of the cryptographic puzzle, rather than the hash of the full block of information. This makes it easier to verify a chain of blocks since one only needs to examine the hashes of the headers, and not the hashes of many, many transactions. We may therefore visualize the Bitcoin blockchain as illustrated in Figure 16.1.
Our description of Bitcoin is meant to convey the ideas behind cryptocurrencies in general and thus greatly simplifies the details of the actual Bitcoin protocol. We have avoided describing many practical matters, such as the formal specification of the language used by Bitcoin to specify transactions, the data structures and housekeeping that are needed to keep track of transactions, as well as the details behind the use of the hash of a public key for addressing users. The reader can look at [Bitcoin] or [Narayanan et al.] to find the details behind Bitcoin and its protocols.
In the scheme of Section 16.2, show that a valid coin satisfies the verification equations
In the scheme of Section 16.2, a hacker discovers the Bank’s secret number . Show how coins can be produced and spent without having an account at the bank.
In the scheme of Section 16.2, the numbers and are powers of , but the exponents are supposed to be hard to find. Suppose we take .
Show that if the Spender replaces with such that , then the verification equations still work.
Show how the Spender can double spend without being identified.
Suppose that, in the scheme of Section 16.2, the coin is represented only as ; for example, by ignoring and , taking the hash function to be a function of only , and ignoring the verification equation . Show that the Spender can change the value of to any desired number (without informing the Bank), compute a new value of , and produce a coin that will pass the two remaining verification equations.
In the scheme of Section 16.2, if the Spender double spends, once with the Merchant and once with the Vendor, why is it very likely that (where are as in the discussion of Fraud Control)?
A Sybil attack is one in which an entity claims multiple identities or roles in order to achieve an advantage against a system or protocol. Explain why there is no advantage in launching a Sybil attack against Bitcoin’s proof of work approach to determining which user will randomly be selected to announce the next block in a blockchain.
Forgetful Fred would like to save a file containing his homework to a server in the network, which will allow him to download it later when he needs it. Describe an approach using hash functions that will allow Forgetful Fred to verify that he has obtained the correct file from the server, without requiring that Fred keep a copy of the entire file to check against the downloaded file.
A cryptographic hash puzzle involves finding a nonce $x$ that, when combined with a message $m$, will hash to an output that belongs to a target set $S$, i.e., $H(m \,\|\, x) \in S$.
Assuming that all outputs of a cryptographic hash function are equally likely, find an expression for the average number of nonces that need to be tried in order to satisfy $H(m \,\|\, x) \in S$.
Using your answer from part (a), estimate how many hashes it takes to obtain a hash value where the first 66 binary digits are 0s.
Imagine, if you will, that you have made billions of dollars from Internet stocks and you wish to leave your estate to relatives. Your money is locked up in a safe whose combination only you know. You don’t want to give the combination to each of your seven children because they are less than trustworthy. You would like to divide it among them in such a way that three of them have to get together to reconstruct the real combination. That way, someone who wants some of the inheritance must somehow cooperate with two other children. In this chapter we show how to solve this type of problem.
The first situation that we present is the simplest. Consider the case where you have a message $M$, represented as an integer, that you would like to split between two people, Alice and Bob, in such a way that neither of them alone can reconstruct the message $M$. A solution to this problem readily presents itself: Give Alice a random integer $r$ and give Bob $M - r$. In order to reconstruct the message $M$, Alice and Bob simply add their pieces together.
A few technical problems arise from the fact that it is impossible to choose a random integer in a way that all integers are equally likely (the sum of the infinitely many equal probabilities, one for each integer, cannot equal 1). Therefore, we choose an integer $n$ larger than all possible messages that might occur and regard $M$ and $r$ as numbers mod $n$. Then there is no problem choosing $r$ as a random integer mod $n$; simply assign each integer mod $n$ the probability $1/n$.
Now let us examine the case where we would like to split the secret among three people, Alice, Bob, and Charles. Using the previous idea, we choose two random numbers $r$ and $s$ mod $n$ and give $r$ to Alice, $s$ to Bob, and $M - r - s \pmod n$ to Charles. To reconstruct the message $M$, Alice, Bob, and Charles simply add their respective numbers mod $n$.
For the more general case, if we wish to split the secret among $w$ people, then we choose $w - 1$ random numbers $r_1, \dots, r_{w-1}$ mod $n$ and give them to $w - 1$ of the people, and give $M - (r_1 + \cdots + r_{w-1}) \pmod n$ to the remaining person.
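This additive splitting can be sketched in a few lines; the modulus and secret below are arbitrary illustrative values.

```python
import secrets

def split(M: int, w: int, n: int) -> list[int]:
    """Split the secret M among w people: w - 1 random shares mod n,
    plus one final share chosen so that all w shares sum to M mod n."""
    shares = [secrets.randbelow(n) for _ in range(w - 1)]
    shares.append((M - sum(shares)) % n)
    return shares

def combine(shares: list[int], n: int) -> int:
    # All w shares are needed; the secret is simply their sum mod n.
    return sum(shares) % n

n = 10**12          # any modulus larger than every possible message
M = 190503180520    # an illustrative secret
shares = split(M, 3, n)
assert combine(shares, n) == M
```

Note that any $w - 1$ of the shares are uniformly random mod $n$, so a missing share leaves every possible secret equally likely.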
In the previous section, we showed how to split a secret among $w$ people so that all $w$ were needed in order to reconstruct the secret. In this section we present methods that allow a smaller subset of the people to reconstruct the secret.
It has been reported that the control of nuclear weapons in Russia employed a safety mechanism where two out of three important people were needed in order to launch missiles. This idea is not uncommon. It’s in fact a plot device that is often employed in spy movies. One can imagine a control panel with three slots for keys and the missile launch protocol requiring that two of the three keys be inserted and turned at the same time in order to launch missiles to eradicate the earth.
Why not just use the secret splitting scheme of the previous section? Suppose some country is about to attack the enemy of the week, and the secret is split among three officials. A secret splitting method would need all three in order to reconstruct the key needed for the launch codes. This might not be possible; one of the three might be away on a diplomatic mission making peace with the previous week’s opponent or might simply refuse because of a difference of opinion.
Let $t \le w$ be positive integers. A $(t, w)$-threshold scheme is a method of sharing a message $M$ among a set of $w$ participants such that any subset consisting of $t$ participants can reconstruct the message $M$, but no subset of smaller size can reconstruct $M$.
The $(t, w)$-threshold schemes are key building blocks for more general sharing schemes, some of which will be explored in the Exercises for this chapter. We will describe two methods for constructing a $(t, w)$-threshold scheme.
The first method was invented in 1979 by Shamir and is known as the Shamir threshold scheme or the Lagrange interpolation scheme. It is based upon some natural extensions of ideas that we learned in high school algebra, namely that two points are needed to determine a line, three points to determine a quadratic, and so on.
Choose a prime $p$, which must be larger than all possible messages and also larger than the number $w$ of participants. All computations will be carried out mod $p$. The prime $p$ replaces the integer $n$ of Section 17.1. If a composite number were to be used instead, the matrices we obtain might not have inverses.
The message $M$ is represented as a number mod $p$, and we want to split it among $w$ people in such a way that $t$ of them are needed to reconstruct the message. The first thing we do is randomly select $t - 1$ integers mod $p$; call them $s_1, s_2, \dots, s_{t-1}$. Then the polynomial
$$s(x) \equiv M + s_1 x + s_2 x^2 + \cdots + s_{t-1} x^{t-1} \pmod p$$
is a polynomial such that $s(0) \equiv M \pmod p$. Now, for the $w$ participants, we select $w$ distinct integers $x_1, \dots, x_w$ mod $p$ and give each person a pair $(x_i, y_i)$ with $y_i \equiv s(x_i) \pmod p$. For example, $x_i = i$ is a reasonable choice for the $x_i$'s, so we give out the pairs $(i, s(i))$, one to each person. The prime $p$ is known to all, but the polynomial $s(x)$ is kept secret.
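Generating the shares amounts to evaluating a random polynomial with constant term $M$. A minimal sketch (the prime $2^{61} - 1$ is an arbitrary illustrative choice):

```python
import secrets

def make_shares(M: int, t: int, w: int, p: int) -> list[tuple[int, int]]:
    """Build s(x) = M + s_1 x + ... + s_{t-1} x^{t-1} mod p with random
    coefficients, and hand out the pairs (i, s(i)) for i = 1, ..., w."""
    coeffs = [M] + [secrets.randbelow(p) for _ in range(t - 1)]

    def s(x: int) -> int:
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

    return [(i, s(i)) for i in range(1, w + 1)]

p = 2**61 - 1  # a prime larger than the secret and than w
shares = make_shares(190503180520, t=3, w=8, p=p)
```

Each participant receives one pair from the list; the coefficients $s_1, \dots, s_{t-1}$ are discarded after distribution.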
Now suppose $t$ people get together and share their pairs. For simplicity of notation, we assume the pairs are $(x_1, y_1), (x_2, y_2), \dots, (x_t, y_t)$. They want to recover the message $M$.
We begin with a linear system approach. Suppose we have a polynomial $s(x)$ of degree $t - 1$ that we would like to reconstruct from the points $(x_k, y_k)$, where $y_k \equiv s(x_k) \pmod p$ for $1 \le k \le t$. This means that
$$M + s_1 x_k + s_2 x_k^2 + \cdots + s_{t-1} x_k^{t-1} \equiv y_k \pmod p, \qquad 1 \le k \le t.$$
If we denote $s_0 = M$, then we may rewrite this as
$$\begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{t-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{t-1} \\ \vdots & & & & \vdots \\ 1 & x_t & x_t^2 & \cdots & x_t^{t-1} \end{pmatrix} \begin{pmatrix} s_0 \\ s_1 \\ \vdots \\ s_{t-1} \end{pmatrix} \equiv \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_t \end{pmatrix} \pmod p.$$
The matrix, let's call it $V$, is what is known as a Vandermonde matrix. We know that this system has a unique solution mod $p$ if the determinant of the matrix is nonzero mod $p$ (see Section 3.8). It can be shown that the determinant is
$$\det V = \prod_{1 \le j < k \le t} (x_k - x_j),$$
which is zero mod $p$ only when two of the $x_i$'s coincide mod $p$ (this is where we need $p$ to be prime; see Exercise 13(a) in Chapter 3). Thus, as long as we have distinct $x_i$'s, the system has a unique solution.
We now describe an alternative approach that leads to a formula for the reconstruction of the polynomial and hence for the secret message. Our goal is to reconstruct a polynomial $s(x)$ of degree $t - 1$ given that we know $t$ of its values $y_k = s(x_k)$, $1 \le k \le t$. First, let
$$l_k(x) = \prod_{\substack{j=1 \\ j \ne k}}^{t} \frac{x - x_j}{x_k - x_j}.$$
Here, we work with fractions mod $p$ as described in Section 3.3. Then
$$l_k(x_i) \equiv \begin{cases} 1 \pmod p & \text{if } i = k, \\ 0 \pmod p & \text{if } i \ne k. \end{cases}$$
This is because $l_k(x_k)$ is a product of factors $(x_k - x_j)/(x_k - x_j)$, all of which are 1. When $i \ne k$, the product for $l_k(x_i)$ contains the factor $(x_i - x_i)$, which is 0.
The Lagrange interpolation polynomial
$$P(x) = \sum_{k=1}^{t} y_k \, l_k(x)$$
satisfies the requirement $P(x_i) \equiv y_i \pmod p$ for $1 \le i \le t$. For example,
$$P(x_1) = y_1 l_1(x_1) + y_2 l_2(x_1) + \cdots + y_t l_t(x_1) \equiv y_1 \cdot 1 + y_2 \cdot 0 + \cdots + y_t \cdot 0 \equiv y_1 \pmod p.$$
We know from the previous approach with the Vandermonde matrix that there is only one polynomial of degree at most $t - 1$ that takes on the specified values. Therefore, $s(x) \equiv P(x)$.
Now, to reconstruct the secret message $M$, all we have to do is calculate $P(x)$ and evaluate it at $x = 0$. This gives us the formula
$$M \equiv P(0) \equiv \sum_{k=1}^{t} y_k \prod_{\substack{j=1 \\ j \ne k}}^{t} \frac{x_j}{x_j - x_k} \pmod p.$$
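This formula can be sketched directly in code; the prime and the quadratic used in the self-check below are illustrative, not the ones from the worked example that follows.

```python
def recover_secret(shares: list[tuple[int, int]], p: int) -> int:
    """Evaluate the Lagrange interpolating polynomial at x = 0 mod p:
    M = sum_k y_k * prod_{j != k} x_j / (x_j - x_k)  (mod p)."""
    M = 0
    for k, (xk, yk) in enumerate(shares):
        term = yk
        for j, (xj, _) in enumerate(shares):
            if j != k:
                # the fraction x_j / (x_j - x_k) mod p, via the modular inverse
                term = term * xj % p * pow(xj - xk, -1, p) % p
        M = (M + term) % p
    return M

p = 2**61 - 1  # an illustrative prime
# a degree-2 polynomial s(x) = 190503180520 + 5x + 7x^2 mod p; shares at x = 2, 3, 7
s = lambda x: (190503180520 + 5 * x + 7 * x * x) % p
assert recover_secret([(2, s(2)), (3, s(3)), (7, s(7))], p) == 190503180520
```

Any $t$ shares of a degree-$(t-1)$ polynomial give the same constant term, which is exactly the threshold property.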
Let's construct a $(3, 8)$-threshold scheme. We have eight people, and we want any three to be able to determine the secret, while two people cannot determine any information about the message.
Suppose the secret is the number $M = 190503180520$ (which corresponds to the word secret via $s = 19$, $e = 05$, $c = 03$, $r = 18$, $e = 05$, $t = 20$). Choose a prime $p$ at least as large as the secret (there is no advantage in using primes much larger than the maximum size of the secret). Choose random numbers $s_1$ and $s_2$ mod $p$ and form the polynomial
$$s(x) = M + s_1 x + s_2 x^2.$$
For example, let’s work with
We now give the eight people pairs $(x_i, s(x_i))$. There is no need to choose the values of $x_i$ randomly, so we simply use $x_i = i$. Therefore, we distribute the pairs $(i, s(i))$, $1 \le i \le 8$, one to each person:
Suppose persons 2, 3, and 7 want to collaborate to determine the secret. Let’s use the Lagrange interpolating polynomial. They calculate that the following polynomial passes through their three points:
At this point they realize that they should have been working mod $p$. But
so they replace $1/5$ by the inverse of 5 mod $p$, as in Section 3.3, and reduce mod $p$ to obtain
This is, of course, the original polynomial $s(x)$. All they care about is the constant term 190503180520, which is the secret. (The last part of the preceding calculations could have been shortened slightly, since they only needed the constant term, not the whole polynomial.)
Similarly, any three people could reconstruct the polynomial and obtain the secret.
If persons 2, 3, and 7 chose the linear system approach instead, they would need to solve the following:
This yields
so they again recover the polynomial and the message.
What happens if only two people get together? Do they obtain any information? For example, suppose that person 4 and person 6 share their points (4, 442615222255) and (6, 852136050573) with each other. Let $M'$ be any possible secret. There is a unique quadratic polynomial $s'(x)$ passing through the points $(0, M')$, (4, 442615222255), and (6, 852136050573). Therefore, any secret $M'$ can still occur.
Similarly, they cannot guess the share held, for example, by person 7: Any point $(7, y)$ yields a unique secret $M'$, and any secret $M'$ yields a polynomial $s'(x)$, which corresponds to $y = s'(7)$. Therefore, every value of $y$ can occur, and each corresponds to a secret. So persons 4 and 6 don't obtain any additional information about which secrets are more likely when they have only their own two points.
Similarly, if we use a polynomial of degree $t - 1$, there is no way that $t - 1$ persons can obtain information about the message with only their data. Therefore, $t$ people are required to obtain the message.
For another example, see Example 38 in the Computer Appendices.
There are other methods that can be used for secret sharing. We now describe one due to Blakley, also from 1979. Suppose there are several people and we want to arrange that any three can find the secret, but no two can. Choose a prime $p$ and let the secret be $x_0$, a number mod $p$. Choose $y_0$ and $z_0$ randomly mod $p$. We therefore have a point $Q = (x_0, y_0, z_0)$ in three-dimensional space mod $p$. Each person is given the equation of a plane passing through $Q$. This is accomplished as follows. Choose $a$ and $b$ randomly mod $p$ and then set $c \equiv z_0 - a x_0 - b y_0 \pmod p$. The plane is then
$$z \equiv a x + b y + c \pmod p.$$
This is done for each person, with new random choices of $a$ and $b$ each time. Usually, three planes will intersect in a point, which must be $Q$. Two planes will intersect in a line, so usually no information can be obtained concerning the secret (but see Exercise 13).
Note that only one coordinate, $x_0$, should be used to carry the secret. If the secret had instead been distributed among all three coordinates $(x_0, y_0, z_0)$, then there might be only one meaningful message corresponding to a point on the line that is the intersection of two persons' planes.
The three persons who want to deduce the secret can proceed as follows. They have three equations
$$z \equiv a_i x + b_i y + c_i \pmod p, \qquad i = 1, 2, 3,$$
which yield the matrix equation
$$\begin{pmatrix} a_1 & b_1 & -1 \\ a_2 & b_2 & -1 \\ a_3 & b_3 & -1 \end{pmatrix} \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix} \equiv \begin{pmatrix} -c_1 \\ -c_2 \\ -c_3 \end{pmatrix} \pmod p.$$
As long as the determinant of this matrix is nonzero mod $p$, the matrix can be inverted mod $p$ and the secret can be found (of course, in practice, one would tend to solve this by row operations rather than by inverting the matrix).
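Recovery in the Blakley scheme is thus a small linear solve mod $p$. A sketch using row operations; the prime, plane coefficients, and secret point below are made up for illustration.

```python
def solve_mod(A, b, p):
    """Solve A x = b (mod p) for a square matrix A by Gauss-Jordan
    elimination, using pow(v, -1, p) for division mod the prime p."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p != 0:
                f = M[r][col]
                M[r] = [(v - f * w) % p for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Three planes z = a_i x + b_i y + c_i (mod p), each built through the secret point.
p = 101
x0, y0, z0 = 42, 17, 93            # hypothetical point Q; the secret is x0
planes = []
for a, bb in [(3, 5), (11, 2), (7, 61)]:   # hypothetical random (a, b) pairs
    c = (z0 - a * x0 - bb * y0) % p
    planes.append((a, bb, c))
A = [[a, bb, -1] for a, bb, _ in planes]
rhs = [-c for _, _, c in planes]
assert solve_mod(A, rhs, p) == [x0, y0, z0]
```

The code assumes the three planes are in "general position," i.e., that the coefficient matrix is invertible mod $p$.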
As an example, fix a prime $p$ and suppose we give A, B, C, D, E the following planes:
If A, B, C want to recover the secret, they solve
The solution is the point $(x_0, y_0, z_0)$, so the secret is $x_0$. Similarly, any three of A, B, C, D, E can cooperate to recover $(x_0, y_0, z_0)$.
By using $(t-1)$-dimensional hyperplanes in $t$-dimensional space, we can use the same method to create a $(t, w)$-threshold scheme for any values of $t$ and $w$.
As long as $p$ is reasonably large, it is very likely that the matrix is invertible, though this is not guaranteed. It would not be hard to arrange ways to choose the coefficients so that the matrix is always invertible. Essentially, this is what happens in the Shamir method. The matrix equations for both methods are similar, and the Shamir method could be regarded as a special case of the Blakley method. But since the Shamir method yields a Vandermonde matrix, the equations can always be solved. The other advantage of the Shamir method is that it requires less information to be carried by each person: a pair $(x_i, y_i)$ versus a plane $(a, b, c)$.
We now return to the Shamir method and consider variations of the basic situation. By giving certain persons more shares, it is possible to make some people more important than others. For example, suppose we have a system in which eight shares are required to obtain the secret, and suppose the boss is given four shares, her daughters are given two shares, and the other employees are each given one share. Then the boss and two of her daughters can obtain the secret, or three daughters and two regular employees, for example.
Here is a more complicated situation. Suppose two companies A and B share a bank vault. They want a system where four employees from A and three from B are needed in order to obtain the secret combination. Clearly it won't work if we simply supply shares that are all for the same secret, since one company could simply use enough of its own employees' shares that the other company's shares would not be needed. The following is a solution that works. Write the secret as the sum of two numbers $M = M_1 + M_2$. Now make $M_1$ into a secret shared among the employees of A as the constant term of a polynomial of degree 3. Similarly, let $M_2$ be the constant term of a polynomial of degree 2 and use this to distribute shares of $M_2$ among the employees of B. If four employees of A and three employees of B get together, then those from A determine $M_1$ and those from B determine $M_2$. Then they add $M_1$ and $M_2$ to get $M$.
Note that A does not obtain any information about the secret from $M_1$ alone, since $M_1 + M_2 = M$ has a solution $M_2$ for every $M$, so every possible value of $M$ corresponds to a possible value of $M_2$. Therefore, knowing $M_1$ does not help to find the secret; A also needs to know $M_2$.
Suppose you have a secret, namely 5. You want to set up a system where four persons A, B, C, D are given shares of the secret in such a way that any two of them can determine the secret, but no one alone can determine the secret. Describe how this can be done. In particular, list the actual pieces of information (i.e., numbers) that you could give to each person to accomplish this.
Persons , , participate in a Shamir secret sharing scheme. They work mod 11. receives the share , receives , and receives .
Show that at least one of the three shares is incorrect.
Suppose and have correct shares. Find the secret.
You set up a (2, 30) Shamir threshold scheme, working mod the prime 101. Two of the shares are (1,13) and (3,12). Another person received the share (2, *), but the part denoted by * is unreadable. What is the correct value of * ?
You set up a (2, 10) Shamir threshold scheme, working mod the prime 73. Two of the shares are (1, 10) and (2, 18). A third share is (5, *). What is *?
In a Shamir secret sharing scheme with modulus , the following were given to Alice, Bob, and Charles: , , . Calculate the corresponding Lagrange interpolating polynomial, and identify the secret.
In a Shamir secret sharing scheme, the secret is the constant term of a degree 4 polynomial mod the prime 1093. Suppose three people have the shares (2, 197), (4, 874), and (13, 547). How many possibilities are there for the secret? (Note: We assume that .)
Mark doesn’t like mods, so he wants to implement a Shamir secret sharing scheme without them. His secret is (a positive integer) and he gives person the share for a positive integer that he randomly chooses. Bob receives the share . Describe how Bob can narrow down the possibilities for and determine what values of are possible.
A key distributor uses a -threshold scheme to distribute a combination to an electronic safe to 20 participants.
What is the smallest number of participants needed to open the safe, given that one unknown participant is a cheater who will reveal a random share?
If they are only allowed to try one combination (if they are wrong the electronic safe shuts down permanently), then how many participants are necessary to open the safe? (Note: This one is a little subtle. A majority vote actually works with four people, but you need to show that a tie cannot occur.)
A certain military office consists of one general, two colonels, and five desk clerks. They have control of a powerful missile but don’t want the missile launched unless the general decides to launch it, or the two colonels decide to launch it, or the five desk clerks decide to launch it, or one colonel and three desk clerks decide to launch it. Describe how you would do this with a secret sharing scheme. (Hint: Try distributing the shares of a (10, 30) Shamir scheme.)
Suppose there are four people in a room, exactly one of whom is a foreign agent. The other three people have been given pairs corresponding to a Shamir secret sharing scheme in which any two people can determine the secret. The foreign agent has randomly chosen a pair. The people and pairs are as follows. All the numbers are mod 11.
Determine who the foreign agent is and what the message is.
Consider the following situation: Government A, Government B, and Government C are hostile to each other, but the common threat of Antarctica looms over them. They each send a delegation consisting of 10 members to an international summit to consider the threat that Antarctica’s penguins pose to world security. They decide to keep a watchful eye on their tuxedoed rivals. However, they also decide that if the birds get too rowdy, then they will launch a full-force attack on Antarctica. Using secret sharing techniques, describe how they can arrange to share the launch codes so that it is necessary that three members from delegation A, four members from delegation B, and two members from C cooperate to reconstruct the launch codes.
This problem explores what is known as the Newton form of the interpolant. In the Shamir method, we presented two methods for calculating the interpolating polynomial. The system of equations approach is difficult to solve but easy to evaluate, while with the Lagrange approach it is quite simple to determine the interpolating polynomial, but it becomes a labor to evaluate. The Newton form of the interpolating polynomial comes from choosing $1,\ (x - x_1),\ (x - x_1)(x - x_2),\ \dots,\ (x - x_1)\cdots(x - x_{t-1})$ as a basis. The interpolating polynomial is then
$$P(x) = a_0 + a_1 (x - x_1) + a_2 (x - x_1)(x - x_2) + \cdots + a_{t-1} (x - x_1)\cdots(x - x_{t-1}).$$
Show that we can solve for the coefficients $a_0, \dots, a_{t-1}$ by solving a matrix equation. What special properties do you observe in the matrix? Why does this make the system easier to solve?
In a Blakley scheme, suppose persons A and B are given the planes and . Show that they can recover the secret without a third person.
Alice, Bob, and Charles have each received shares of a secret that was split using the secret splitting scheme described in Section 17.1. Suppose that . Alice is given the share , Bob is given the share , and Charles is given the share . Determine the secret .
For a Shamir (4,7) secret sharing scheme, take and let the shares be
Take a set of four shares and find the secret. Now take another set of four shares and verify that the secret obtained is the same.
Alice, Bob, Charles, and Dorothy use a (2, 4) Shamir secret sharing scheme using the prime . Suppose that Alice gets the share (38, 358910), Bob gets the share (3876, 9612), Charles gets the share (23112, 28774), and Dorothy gets the share (432, 178067). One of these shares was incorrectly received. Determine which one is incorrect, and find the secret.
Alice is living in Anchorage and Bob is living in Baltimore. A friend, not realizing that they are no longer together, leaves them a car in his will. How do they decide who gets the car? Bob phones Alice and says he’ll flip a coin. Alice chooses “Tails” but Bob says “Sorry, it was Heads.” So Bob gets the car.
For some reason, Alice suspects Bob might not have been honest. (Actually, he told the truth; as soon as she called tails, he pulled out his specially made two-headed penny so he wouldn’t have to lie.) She resolves that the next time this happens, she’ll use a different method. So she goes to her local cryptologist, who suggests the following method.
Alice chooses two large random primes $p$ and $q$, both congruent to 3 mod 4. She keeps them secret but sends the product $n = pq$ to Bob. Then Bob chooses a random integer $x$ and computes $y \equiv x^2 \pmod n$. He keeps $x$ secret but sends $y$ to Alice. Alice knows that $y$ has a square root mod $n$ (if it doesn't, her calculations will reveal this fact, in which case she accuses Bob of cheating), so she uses her knowledge of $p$ and $q$ to find the four square roots of $y$ (see Section 3.9). One of these will be $\pm x$, but she doesn't know which one. She chooses one at random (this is the "flip"), say $r$, and sends it to Bob. If $r \equiv \pm x \pmod n$, Bob tells Alice that she wins. If $r \not\equiv \pm x \pmod n$, Bob wins.
But, asks Alice, how can I be sure Bob doesn't cheat? If Alice sends $r$ to Bob and $r \not\equiv \pm x \pmod n$, then Bob knows all four square roots of $y$, so he can factor $n$. In particular, $\gcd(r - x, n)$ gives a nontrivial factor of $n$. Therefore, if it is computationally infeasible to factor $n$, the only way Bob could produce the factors $p$ and $q$ would be when his value of $x$ is not plus or minus the value of $r$ that Alice sends. If Alice sends Bob $\pm x$, Bob has no more information than he had when Alice sent him the number $n$. Therefore, he should not be able to produce $p$ and $q$ in this case. So Alice can check that Bob didn't cheat by asking Bob for the factorization of $n$.
What if Alice tries to cheat by sending Bob a random number rather than a square root of $y$? This would surely prevent Bob from factoring $n$. Bob can guard against this by checking that the square of the number Alice sends is congruent to $y$ mod $n$.
Suppose Alice tries to deceive Bob by sending a number $n$ that is a product of three primes. Of course, Bob could ask Alice for the factorization of $n$ at the end of the game; if Alice produces two factors, they can be quickly checked for primality. But Bob shouldn't worry about this possibility. When $n$ is the product of three distinct primes, there are eight square roots of $y$. Therefore, up to sign there are four choices of numbers for Alice to send. Each of the three wrong choices will allow Bob to find a nontrivial factor of $n$. So Alice would decrease her chances of winning to only one in four. Therefore, she should not try this.
There is one flaw in this procedure. Suppose Bob decides he wants to lose. He can then claim that his value of $x$ was exactly the value that Alice sent him. Alice cannot dispute this, since the only information she has is the square of Bob's number, which is congruent to the square of her number. There are other procedures that can prevent Bob from trying to lose, but we will not discuss them here.
Finally, we should mention that it is not difficult to find primes $p$ and $q$ that are congruent to 3 mod 4. The density of primes congruent to 1 mod 4 is the same as the density of primes that are 3 mod 4. Therefore, find a random prime $p$. If it is not 3 mod 4, try another. This process should succeed quickly. We can find $q$ similarly.
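Alice's square-root computation can be sketched as follows. The primes here are toys (real ones have hundreds of digits); the key fact used is that when $p \equiv 3 \pmod 4$, a square root of a square $y$ mod $p$ is $y^{(p+1)/4} \bmod p$, and the Chinese remainder theorem combines the roots mod $p$ and mod $q$.

```python
from math import gcd

def sqrt_mod_pq(y, p, q):
    """Return the four square roots of y mod n = p*q, where p ≡ q ≡ 3 (mod 4)."""
    n = p * q
    rp = pow(y, (p + 1) // 4, p)   # a square root of y mod p
    rq = pow(y, (q + 1) // 4, q)   # a square root of y mod q
    roots = set()
    for sp in (rp, (-rp) % p):
        for sq in (rq, (-rq) % q):
            # Chinese remainder: r ≡ sp (mod p) and r ≡ sq (mod q)
            r = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n
            roots.add(r)
    return roots

p, q = 103, 107      # toy primes, both ≡ 3 (mod 4)
n = p * q
x = 1234             # Bob's secret number
y = x * x % n        # what Bob sends to Alice
roots = sqrt_mod_pq(y, p, q)
assert x % n in roots and all(r * r % n == y for r in roots)
# A root r with r != ±x lets Bob factor n, proving a win:
r = next(v for v in roots if v not in (x % n, (-x) % n))
assert gcd(r - x, n) in (p, q)
```

The final two lines illustrate the fraud control discussed above: receiving a root other than $\pm x$ is exactly what lets Bob produce the factorization of $n$.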
Alice chooses
She sends
to Bob. Bob takes
(this isn’t as random as it looks; but Bob thinks the decimal expansions of square roots look random) and computes
which he sends to Alice.
Alice computes
Therefore, she knows that
The Chinese remainder theorem puts these together in four ways to yield
Suppose Alice sends 1012103737618676889 to Bob. This is congruent to $\pm x \pmod n$, so Bob declares Alice the winner.
Suppose instead that Alice sends 937850352623334103 to Bob. Then Bob claims victory. By computing the gcd of $937850352623334103 - x$ and $n$, which yields a nontrivial factor of $n$,
he can prove that he won.
Alice and Bob quickly tire of flipping coins over the telephone and decide to try poker. Bob pulls out his deck of cards, shuffles, and deals two hands, one for Alice and one for himself. Now what does he do? Alice won’t let him read the cards to her. Also, she suggests that he might not be playing with a full deck. Arguments ensue. But then someone suggests that they each choose their own cards. The betting is fast and furious. After several hundred coins (they remain unused from the coin-flipping protocol) have been wagered, Alice and Bob discover that they each have a royal flush. Each claims the other must have cheated. Fortunately, their favorite cryptologist can help.
Here is the method she suggests, in nonmathematical terms. Bob takes 52 identical boxes, puts a card in each box, and puts a lock on each one. He dumps the boxes in a bag and sends them to Alice. She chooses five boxes, puts her locks on them, and sends them back to Bob. He takes his locks off and sends the five boxes back to Alice, who takes her locks off and finds her five cards. Then she chooses five more boxes and sends them back to Bob. He takes off his locks and gets his five cards. Now suppose Alice wants to replace three cards. She puts three cards in a discard box, puts on her lock, and sends the box to Bob. She then chooses three boxes from the remaining 42 card boxes, puts on her locks, and sends them to Bob. Bob removes his locks and sends them back to Alice, who removes her locks and gets the cards. If Bob wants to replace two cards, he puts them in another discard box, puts on his lock, and sends the box to Alice. She chooses two card boxes and sends them to Bob. He removes his locks and gets his cards. They then compare hands to see who wins. We’ll assume Alice wins.
After the hand has been played, Bob wants to check that Alice put three cards in her discard box since he wants to be sure she wasn’t playing with eight cards. He puts his lock on the box and sends the box to Alice, who takes her lock off. Since Bob’s lock is still on the box, she can’t change the contents. She sends the box back to Bob, who removes the lock and finds the three cards that Alice discarded (this differs from standard poker in that Bob sees the actual cards discarded; in a standard game, Bob only sees that Alice discards three cards and doesn’t need to look at them afterward). Similarly, Alice can check that Bob discarded two cards.
Bob can check that Alice played with the hand that was dealt by asking her to send her cards to him. Alice cannot change her hand since all the remaining cards still have Bob’s locks on them (and Bob can’t open them since Alice has them in her possession).
Of course, various problems arise if Alice or Bob unjustly accuses the other of cheating. But, ignoring such complications, we see that Alice and Bob can now play poker. However, the postage for sending 52 boxes back and forth is starting to cut into Alice’s profits. So she goes back to her cryptologist and asks for a mathematical implementation. The following is the method.
Alice and Bob agree on a large prime $p$. Alice chooses a secret integer $\alpha$ with $\gcd(\alpha, p-1) = 1$, and Bob chooses a secret integer $\beta$ with $\gcd(\beta, p-1) = 1$. Alice computes $\alpha'$ such that $\alpha \alpha' \equiv 1 \pmod{p-1}$ and Bob computes $\beta'$ with $\beta \beta' \equiv 1 \pmod{p-1}$. A different $\alpha$ and $\beta$ are used for each hand. A different $p$ could be used for each hand also.
Note that x^(αα′) ≡ x (mod p), and similarly for β. This can be seen as follows: αα′ ≡ 1 (mod p − 1), so αα′ = 1 + (p − 1)k for some integer k. Therefore, by Fermat's theorem, x^(αα′) ≡ x^(1+(p−1)k) ≡ x (x^(p−1))^k ≡ x (mod p) when p ∤ x.
Trivially, we also have x^(αα′) ≡ x (mod p) when x ≡ 0 (mod p).
The 52 cards are changed to 52 distinct numbers c₁, …, c₅₂ mod p via some prearranged scheme. Bob computes c_i^β (mod p) for 1 ≤ i ≤ 52, randomly permutes these numbers, and sends them to Alice. Alice chooses five of the numbers, call them b₁, …, b₅, computes b_i^α (mod p) for each, and sends these numbers to Bob. Bob takes off his lock by raising these numbers to the β′ power and sends them to Alice, who removes her lock by raising to the α′ power. This gives Alice her hand.
Alice then chooses five more of the numbers and sends them back to Bob, who removes his locks by raising the numbers to the β′ power. This gives him his hand. The rest of the game proceeds in this fashion.
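The locking and unlocking above is nothing but modular exponentiation, so the deal can be sketched in a few lines of Python. The prime, the secret exponents, and the card numbers below are illustrative choices, not values from the text.

```python
import math
import random

p = 2**61 - 1  # a prime; a real game would use a much larger one

def pick_secret():
    # a secret exponent must be invertible mod p - 1
    while True:
        a = random.randrange(3, p - 1)
        if math.gcd(a, p - 1) == 1:
            return a

alpha = pick_secret()              # Alice's lock
alpha_p = pow(alpha, -1, p - 1)    # Alice's key: alpha * alpha_p = 1 (mod p-1)
beta = pick_secret()               # Bob's lock
beta_p = pow(beta, -1, p - 1)      # Bob's key

cards = [200514, 10010311, 1721050514, 11091407, 10305]  # ten, ..., ace

# Bob locks every card and shuffles the results
locked = [pow(c, beta, p) for c in cards]
random.shuffle(locked)

# Alice picks one card for herself and adds her own lock
double_locked = pow(locked[0], alpha, p)
# Bob removes his lock, then Alice removes hers
alice_card = pow(pow(double_locked, beta_p, p), alpha_p, p)
assert alice_card in cards
```

Since αα′ ≡ ββ′ ≡ 1 (mod p − 1), the two locks commute and cancel no matter in which order they are applied.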
It seems to be quite difficult for Alice to deduce Bob's cards. She could guess which encrypted card corresponds to a fixed unencrypted card c_j. This means Alice would need to solve equations of the form c_j^x ≡ c_i^β (mod p) for x. Doing this for the 52 choices of encrypted card c_i^β would give at most 52 choices for β. The correct exponent could then be determined by choosing another card and trying the various possibilities for β to see which ones give encrypted values that are on the list of encrypted cards. But these equations that Alice needs to solve are discrete logarithm problems, which are generally assumed to be difficult when p is large (see Chapter 10).
Let’s consider a simplified game where there are only five cards: ten, jack, queen, king, ace. Each player is dealt one card. The winner is the one with the higher card. Change the cards to numbers using a = 01, b = 02, …, z = 26, so we have the following: ten = 200514, jack = 10010311, queen = 1721050514, king = 11091407, ace = 010305.
Let the prime be p. Alice chooses her secret α and Bob chooses his secret β. Alice computes α′ ≡ α⁻¹ (mod p − 1) and Bob computes β′ ≡ β⁻¹ (mod p − 1). This can be done via the extended Euclidean algorithm. Just to be sure, Alice checks that αα′ ≡ 1 (mod p − 1), and Bob does a similar calculation with β and β′.
Bob now calculates the five encrypted cards c_i^β (congruences are mod p).
He shuffles these five numbers and sends them to Alice.
Since Alice does not know β, it is unlikely she can deduce which card is which without a lot of computation.
Alice now chooses her card by choosing one of these numbers – for example, the fourth – raising it to the α power, and sending the result to Bob.
Bob takes off his lock by raising this number to the β′ power and sends it back to Alice.
Alice now removes her lock by raising the result to the α′ power and obtains the number 200514.
Her card is therefore the ten.
Now Alice chooses Bob’s card by simply choosing one of the original numbers she received – for example, 1507298770 – and sending it back to Bob. Bob computes the β′ power of this number mod p and obtains 10010311.
Therefore, his card is the jack.
This accomplishes the desired dealing of the cards. Alice and Bob now compare cards and Bob wins. To prevent cheating, Alice and Bob then reveal their secret exponents α and β. Suppose Alice tries to claim she has the king. Bob can quickly use the revealed exponents to decrypt the number Alice chose and show that the card he sent to Alice was the ten.
For another example of this game, see Example 39 in the Computer Appendices.
No game of poker would be complete without at least the possibility of cheating. Here’s how to do it in the present situation.
Bob goes to his local number theorist, who tells him about quadratic residues. A number y ≢ 0 (mod p) is called a quadratic residue mod p if the congruence x² ≡ y (mod p) has a solution; in other words, y is a square mod p. A nonresidue is a number y ≢ 0 (mod p) such that x² ≡ y (mod p) has no solution.
There is an easy way to decide whether a number y ≢ 0 (mod p) is a quadratic residue or a nonresidue:

y^((p−1)/2) ≡ +1 (mod p) if y is a quadratic residue, and y^((p−1)/2) ≡ −1 (mod p) if y is a nonresidue

(see Exercise 1). This determination can also be done using the Legendre or Jacobi symbol plus quadratic reciprocity. See Section 3.10.
Recall that we needed gcd(α, p − 1) = 1 and gcd(β, p − 1) = 1. Since p − 1 is even, α and β are odd. A card c is encrypted to c^β (mod p), and

(c^β)^((p−1)/2) ≡ (c^((p−1)/2))^β ≡ (±1)^β ≡ ±1 (mod p),

since β is odd (with the same choice of signs on both sides of the congruence). Therefore, c^β is a quadratic residue mod p if and only if c is a quadratic residue. The corresponding statement also applies to the α and αβ powers of the cards.
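This leak is easy to verify computationally. Here is a check with a small illustrative prime and an odd exponent (toy values, far too small for real use):

```python
# Euler's criterion: y^((p-1)/2) mod p is 1 for residues, p - 1 for nonresidues.
p = 1009     # a small prime, so (p - 1)/2 = 504
beta = 5     # odd, and gcd(5, 1008) = 1

def is_residue(y, p):
    return pow(y, (p - 1) // 2, p) == 1

# Raising to an odd exponent preserves quadratic residuosity
for card in range(2, 50):
    assert is_residue(card, p) == is_residue(pow(card, beta, p), p)
```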
When Alice sends Bob the five cards that will make up her hand, Bob quickly checks these cards to see which are quadratic residues and which are nonresidues. The 52 cards are thereby split into two sets R (the residues) and N (the nonresidues), and for each of Alice's cards, he knows whether the card is in R or N. This gives him a slight advantage. For example, suppose he needs to know whether or not she has the queen of hearts, and he determines that the queen of hearts is in R. If only one of her cards is in R, the chances are low that she has the queen. In this way, Bob obtains a slight advantage and starts winning.
Alice quickly consults her local cryptologist, who fortunately knows about quadratic residues, too. Now when Alice chooses Bob's hand, she arranges that all of his cards are in R, for example. Then she knows that his hand is chosen from 26 cards rather than 52. This is better than the partial information that Bob has and is useful enough that she gains an advantage over Bob. Finally, Alice gets very bold. She sneakily chooses the prime p so that the ace, king, queen, jack, and ten of spades are the only quadratic residues. When she chooses Bob's hand, she gives him five nonresidues. She chooses the five residues for herself. Bob, who has been computing residues and nonresidues on each hand, has already been getting suspicious since his cards have all been residues or all been nonresidues for several hands. But now he sees before the hand is played that she has chosen a royal flush for herself. He accuses her of cheating, arguments ensue, and they go back to coin flipping.
Let’s return to the simplified example. The choice of the prime p was not random. In fact, raising each of the five card numbers to the (p − 1)/2 power mod p gives −1 for the ace and +1 for the other four cards,
so only the ace is a nonresidue, while all the remaining cards are quadratic residues.
When Alice is choosing her hand, she computes the (p − 1)/2 power of each of the five encrypted cards mod p. Exactly one of them, namely 1112225809, gives −1.
This tells her that the ace is 1112225809. She raises it to the α power, then sends it to Bob. He raises it to the β′ power and sends it back to Alice, who raises it to the α′ power. Of course, she finds that her card is the ace.
For more on playing poker over the telephone, see [Fortune-Merritt].
Let g be a primitive root for the prime p. This means that the numbers g, g², …, g^(p−1) yield all of the nonzero congruence classes mod p.
Let y ≡ g^b (mod p) be fixed and suppose x² ≡ y (mod p) has a solution x. Show that b must be even. (Hint: Write x ≡ g^a for some a. Now use the fact that g^j ≡ g^k (mod p) if and only if j ≡ k (mod p − 1).) This shows that the nonzero squares mod p are exactly g², g⁴, …, g^(p−1), and therefore g, g³, …, g^(p−2) are the quadratic nonresidues mod p.
Using the definition of primitive root, show that g^((p−1)/2) ≢ 1 (mod p).
Use Exercise 15 in Chapter 3 to show that g^((p−1)/2) ≡ −1 (mod p).
Let y ≡ g^b (mod p). Show that y^((p−1)/2) ≡ +1 (mod p) if y is a quadratic residue and y^((p−1)/2) ≡ −1 (mod p) if y is a quadratic nonresidue mod p.
In the coin flipping protocol with n = pq, where p and q are primes ≡ 3 (mod 4), suppose Bob sends a number y such that neither y nor −y has a square root mod n.
Show that y cannot be a square both mod p and mod q. Similarly, −y cannot be a square mod both primes.
Suppose y is not a square mod p. Show that −y is a square mod p.
Show that y is a square mod one of the primes p, q and −y is a square mod the other.
Benevolent Alice decides to correct Bob’s “mistake.” Suppose y is a square mod p and −y is a square mod q. Alice calculates a number b such that b² ≡ y (mod p) and b² ≡ −y (mod q) and sends b to Bob (there are two pairs ±b of choices for b). Show how Bob can use this information to factor n and hence claim victory.
Let p be an odd prime. Show that if x² ≡ y² (mod p), then x ≡ ±y (mod p).
Let p be an odd prime. Suppose x² ≡ y² (mod p²) and p ∤ x. Show that x ≡ ±y (mod p²). (Hint: Look at the proof of the Basic Principle in Section 9.3.)
Suppose Alice cheats when flipping coins by choosing n = p², with p prime, rather than a product of two distinct primes. Show that Bob always loses in the sense that Alice always returns ±x. Therefore, it is wise for Bob to ask for the two primes at the end of the game.
A few years ago, it was reported that some thieves set up a fake automatic teller machine at a shopping mall. When a person inserted a bank card and typed in an identification number, the machine recorded the information but responded with the message that it could not accept the card. The thieves then made counterfeit bank cards and went to legitimate teller machines and withdrew cash, using the identification numbers they had obtained.
How can this be avoided? There are several situations where someone reveals a secret identification number or password in order to complete a transaction. Anyone who obtains this secret number, plus some (almost public) identification information (for example, the information on a bank card), can masquerade as this person. What is needed is a way to use the secret number without giving any information that can be reused by an eavesdropper. This is where zero-knowledge techniques come in.
The basic challenge-response protocol is best illustrated by an example due to Quisquater, Guillou, and Berson [Quisquater et al.]. Suppose there is a tunnel with a door, as in Figure 19.1. Peggy (the prover) wants to prove to Victor (the verifier) that she can go through the door without giving any information to Victor about how she does it. She doesn’t even want to let Victor know which direction she can pass through the door (otherwise, she could simply walk down one side and emerge from the other). They proceed as follows. Peggy enters the tunnel and goes down either the left side or the right side of the tunnel. Victor waits outside for a minute, then comes in and stands at point B. He calls out “Left” or “Right” to Peggy. Peggy then comes to point B by the left or right tunnel, as requested. This entire protocol is repeated several times, until Victor is satisfied. In each round, Peggy chooses which side she will go down, and Victor randomly chooses which side he will request.
Since Peggy must choose to go down the left or right side before she knows what Victor will say, she has only a 50% chance of fooling Victor if she doesn’t know how to go through the door. Therefore, if Peggy comes out the correct side for each of 10 repetitions, there is only one chance in 2¹⁰ = 1024 that Peggy doesn’t know how to go through the door. At this point, Victor is probably convinced, though he could try a few more times just to be sure.
Suppose Eve is watching the proceedings on a video monitor carried by Victor. She will not be able to use anything she sees to convince Victor or anyone else that she, too, can go through the door. Moreover, she might not even be convinced that Peggy can go through the door. After all, Peggy and Victor could have planned the sequence of rights and lefts ahead of time. By this reasoning, there is no useful information that Victor obtains that can be transmitted to anyone.
Note that there is never a proof, in a strict mathematical sense, that Peggy can go through the door. But there is overwhelming evidence, obtained through a series of challenges and responses. This is a feature of zero-knowledge “proofs.”
There are several mathematical versions of this procedure, but we’ll concentrate on one of them. Let n be the product of two large primes. Let y be a square mod n with gcd(y, n) = 1. Recall that finding square roots mod n is hard; in fact, finding square roots mod n is equivalent to factoring n (see Section 3.9). However, Peggy claims to know a square root s of y. Victor wants to verify this, but Peggy does not want to reveal s. Here is the method:
1. Peggy chooses a random number r₁ and lets r₂ ≡ s r₁⁻¹ (mod n), so that r₁r₂ ≡ s (mod n). (We may assume that gcd(r₁, n) = 1, so r₁⁻¹ exists; otherwise, Peggy has factored n.) She computes

x₁ ≡ r₁², x₂ ≡ r₂² (mod n)

and sends x₁ and x₂ to Victor.
2. Victor checks that x₁x₂ ≡ y (mod n), then chooses either x₁ or x₂ and asks Peggy to supply a square root of it. He checks that it is an actual square root.
The first two steps are repeated several times, until Victor is convinced.
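An honest run of these two steps can be simulated directly. The modulus and secret below are toy illustrative values (a real n would be an RSA-sized modulus):

```python
import math
import random

p, q = 10007, 10009        # stand-ins for two large primes
n = p * q
s = 1234                   # Peggy's secret square root
y = s * s % n              # the public square

def honest_round():
    # Step 1: Peggy splits s into r1 * r2 and commits to the two squares
    r1 = random.randrange(2, n)
    while math.gcd(r1, n) != 1:        # (a gcd > 1 would actually factor n)
        r1 = random.randrange(2, n)
    r2 = s * pow(r1, -1, n) % n
    x1, x2 = r1 * r1 % n, r2 * r2 % n
    # Step 2: Victor checks the product and challenges one side at random
    assert x1 * x2 % n == y
    challenge, answer = random.choice([(x1, r1), (x2, r2)])
    return answer * answer % n == challenge

assert all(honest_round() for _ in range(10))
```

In each round Victor only ever sees a square root of a fresh random square, which is the source of the zero-knowledge property discussed next.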
Of course, if Peggy knows s, the procedure proceeds without problems. But what if Peggy doesn’t know a square root of y? She can still send Victor two numbers x₁ and x₂ with x₁x₂ ≡ y (mod n). If she knows a square root of x₁ and a square root of x₂, then she knows a square root of x₁x₂ ≡ y. Therefore, for at least one of x₁ and x₂, she does not know a square root. At least half the time, Victor is going to ask her for a square root she doesn’t know. Since computing square roots is hard, she is not able to produce the desired answer, and therefore Victor finds out that she doesn’t know a square root of y.
Suppose, however, that Peggy predicts correctly that Victor will ask for a square root of x₁. Then she chooses a random r, computes x₁ ≡ r² (mod n), and lets x₂ ≡ y x₁⁻¹ (mod n). She sends x₁ and x₂ to Victor, and everything works. This method gives Peggy a 50% chance of fooling Victor on any given round, but it requires her to guess which number Victor will request each time. As soon as she fails, Victor will find out that she doesn’t know a square root of y.
If Victor verifies that Peggy knows a square root of y, does he obtain any information that can be used by someone else? No, since in any step he is only learning a square root of a random square, not a square root of y. Of course, if Peggy uses the same random numbers more than once, he could find out square roots of both x₁ and x₂ and hence a square root of y. So Peggy should be careful in her choice of random numbers.
Suppose Eve is listening. She also will only learn square roots of random numbers. If she tries to use the same sequence of random numbers to masquerade as Peggy, she needs to be asked for the square roots of exactly the same sequence of x₁’s and x₂’s. If Victor asks for a square root of an x₂ in place of an x₁ at one step, for example, Eve will not be able to supply it.
The preceding protocol requires several communications between Peggy and Victor. The Feige-Fiat-Shamir method reduces this number and uses a type of parallel verification. It is then used as the basis of an identification scheme.
Again, let n be the product of two large primes. Peggy has secret numbers s₁, …, s_k. Let v_i ≡ s_i² (mod n) (we assume gcd(s_i, n) = 1). The numbers v₁, …, v_k are sent to Victor. Victor will try to verify that Peggy knows the numbers s₁, …, s_k. Peggy and Victor proceed as follows:
1. Peggy chooses a random integer r, computes x ≡ r² (mod n), and sends x to Victor.
2. Victor chooses numbers b₁, …, b_k with each b_i ∈ {0, 1}. He sends these to Peggy.
3. Peggy computes y ≡ r s₁^b₁ s₂^b₂ ⋯ s_k^b_k (mod n) and sends y to Victor.
4. Victor checks that y² ≡ x v₁^b₁ v₂^b₂ ⋯ v_k^b_k (mod n).
Steps 1 through 4 are repeated several times (each time with a different r).
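One round of these four steps can be sketched as follows, writing the public values as v_i ≡ s_i² (mod n). The modulus and secrets are small illustrative numbers, not secure sizes.

```python
import random

n = 10007 * 10009                       # toy modulus (product of two primes)
k, t = 5, 4
secrets = [random.randrange(2, n) for _ in range(k)]   # s_1, ..., s_k
v = [s * s % n for s in secrets]                       # public v_i

def one_round():
    r = random.randrange(2, n)
    x = r * r % n                                 # Step 1: commitment
    b = [random.randrange(2) for _ in range(k)]   # Step 2: challenge bits
    y = r                                         # Step 3: response
    for bi, si in zip(b, secrets):
        if bi:
            y = y * si % n
    rhs = x                                       # Step 4: verification
    for bi, vi in zip(b, v):
        if bi:
            rhs = rhs * vi % n
    return y * y % n == rhs

assert all(one_round() for _ in range(t))
```

With k = 5 challenge bits and t = 4 repetitions, an impostor who must guess the bits succeeds with probability only 2⁻²⁰.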
Consider the case k = 1. Then Peggy is asked for either r or rs₁. These are two random numbers whose quotient is a square root of v₁. Therefore, this is essentially the same idea as the simplified scheme discussed previously, with quotients instead of products.
Now let’s analyze the case of larger k. Suppose, for example, that Victor sends b₁ = 1 and all other b_i = 0. Then Peggy must produce y ≡ rs₁, which is a square root of xv₁. In fact, in each round, Victor is asking for a square root of a number of the form x v₁^b₁ ⋯ v_k^b_k. Peggy can supply a square root if she knows s₁, …, s_k. If she doesn’t, she will have a hard time computing a square root.
If Peggy doesn’t know any of the numbers s_i (the likely scenario also if someone other than Peggy is pretending to be Peggy), she could guess the string of bits b₁, …, b_k that Victor will send. Suppose she guesses correctly, before she sends x. She lets y be a random number and declares x ≡ y² (v₁^b₁ ⋯ v_k^b_k)⁻¹ (mod n). When Victor sends the string of bits, Peggy sends back the value of y. Of course, the verification congruence is satisfied. But if Peggy guesses incorrectly, she will need to modify her choice of y, which means she will need some square roots of v_i’s.
For example, suppose Peggy is able to supply the correct response when b₁ = 1 and all other b_i = 0. This could be accomplished by guessing the bits and using the preceding method of choosing x. However, suppose Victor instead sends b₁ = b₂ = 1 and all other b_i = 0. Then Peggy will be ready to supply a square root of xv₁ but will be asked to supply a square root of xv₁v₂. This, combined with what she knows, is equivalent to knowing a square root of v₂, which she is not able to compute. In an extreme case, Victor could send all bits equal to 0, which means Peggy must supply a square root of x. With Peggy’s guess as before, this means she would know a square root of v₁. In summary, if Peggy’s guess is not correct, she will need to know the square root of a nonempty product of v_i’s, which she cannot compute. Therefore, there are 2^k possible strings of k bits that Victor can send, and only one will allow Peggy to fool Victor. In one iteration of the protocol, the chances are only one in 2^k that Victor will be fooled. If the procedure is repeated t times, the chances are 1 in 2^(kt) that Victor is fooled. Recommended values are k = 5 and t = 4. Note that this gives the same probability as 20 iterations of the earlier scheme, so the present procedure is more efficient in terms of communication between Peggy and Victor. Of course, Victor has not obtained as strong a verification that Peggy knows, for example, s₁, but he is very certain that Eve is not masquerading as Peggy, since Eve should not know any of the s_i’s.
There is an interesting feature of how the numbers are arranged in Steps 1 and 4. It is possible for Peggy to use a cryptographic hash function and send the hash of x after computing x in Step 1. After Victor computes y² (v₁^b₁ ⋯ v_k^b_k)⁻¹ (mod n) in Step 4, he can compute the hash of this number and compare it with the hash sent by Peggy. The hash function is assumed to be collision resistant, so Victor can be confident that the congruence in Step 4 is satisfied. Since the output of the hash function is probably shorter than x (for example, 256 bits for the hash, compared to 2048 bits for n), this saves a few bits of transmission.
The preceding can be used to design an identification scheme. Let I be a string that includes Peggy’s name, birth date, and any other information deemed appropriate. Let h be a public hash function. A trusted authority Arthur (the bank, a passport agency, ...) chooses n to be the product of two large primes p and q. Arthur computes h(I‖j) for some small values of j, where I‖j means that j is appended to I. Using his knowledge of p and q, he can determine which of these numbers have square roots mod n and calculate a square root for each such number. This yields numbers v₁, …, v_k and square roots s₁, …, s_k. The numbers v₁, …, v_k are made public. Arthur gives the numbers s₁, …, s_k to Peggy, who keeps them secret. The primes p and q are discarded once the square roots are calculated. Likewise, Arthur does not need to store s₁, …, s_k once they are given to Peggy. These two facts add to the security, since someone who breaks into Arthur’s computer cannot compromise Peggy’s security. Moreover, a different n can be used for each person, so it is hard to compromise the security of more than one individual at a time.
Note that since approximately half the numbers mod p and half the numbers mod q have square roots, the Chinese remainder theorem implies that approximately 1/4 of the numbers mod n have square roots. Therefore, each h(I‖j) has approximately a 1/4 probability of having a square root mod n. This means that Arthur should be able to produce the necessary numbers quickly.
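Arthur's calculation can be sketched as follows, assuming toy primes p ≡ q ≡ 3 (mod 4), for which a square root mod each prime is a ((p + 1)/4) power, and using SHA-256 reduced mod n as an illustrative stand-in for the hash; all concrete values are assumptions, not the text's.

```python
import hashlib

p, q = 499, 547          # toy primes, both = 3 (mod 4)
n = p * q

def hash_mod_n(ident, j):
    # "hash of I with j appended," reduced mod n (illustrative choice)
    digest = hashlib.sha256((ident + str(j)).encode()).digest()
    return int.from_bytes(digest, "big") % n

def sqrt_mod_n(v):
    # return s with s*s = v (mod n), or None if v is not a square mod n
    rp = pow(v, (p + 1) // 4, p)       # candidate root mod p
    rq = pow(v, (q + 1) // 4, q)       # candidate root mod q
    if (rp * rp - v) % p or (rq * rq - v) % q:
        return None                    # v is a nonresidue mod n
    # combine the two roots with the Chinese remainder theorem
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n

pairs, j = [], 0
while len(pairs) < 5:                  # collect k = 5 usable (v_i, s_i) pairs
    v = hash_mod_n("Peggy", j)
    s = sqrt_mod_n(v)
    if s is not None:
        pairs.append((v, s))
    j += 1
```

Since roughly one value in four is a square mod n, collecting five usable pairs takes about twenty values of j on average.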
Peggy goes to an automatic teller machine, for example. The machine reads I from Peggy’s card. It downloads n from a database and calculates h(I‖j) for the appropriate values of j. It then performs the preceding procedure to verify that Peggy knows s₁, …, s_k. After a few iterations, the machine is convinced that the person is Peggy and allows her to withdraw cash. A naive implementation would require a lot of typing on Peggy’s part, but at least Eve won’t get Peggy’s secret numbers. A better implementation would use chips embedded in the card and store some information in such a way that it cannot be extracted.
If Eve obtains the communications used in the transaction, she cannot determine Peggy’s secret numbers. In fact, because of the zero-knowledge nature of the protocol, Eve obtains no information on the secret numbers that can be reused in future transactions.
Consider the diagram of tunnels in Figure 19.2. Suppose each of the four doors to the central chamber is locked so that a key is needed to enter, but no key is needed to exit. Peggy claims she has the key to one of the doors. Devise a zero-knowledge protocol in which Peggy proves to Victor that she can enter the central chamber. Victor should obtain no knowledge of which door Peggy can unlock.
Suppose p is a large prime, α is a primitive root, and β ≡ α^x (mod p). The numbers p, α, β are public. Peggy wants to prove to Victor that she knows x without revealing it. They do the following:
1. Peggy chooses a random number r mod p − 1.
2. Peggy computes h₁ ≡ α^r (mod p) and h₂ ≡ β α^(−r) (mod p) and sends h₁, h₂ to Victor.
3. Victor chooses i = 1 or i = 2 and asks Peggy to send either r₁ = r or r₂ = x − r (mod p − 1).
4. Victor checks that h₁h₂ ≡ β (mod p) and that α^(r_i) ≡ h_i (mod p).
They repeat this procedure t times, for some specified t.
Suppose Peggy does not know x. Why will she usually be unable to produce numbers that convince Victor?
If Peggy does not know x, what is the probability that Peggy can convince Victor that she knows x?
Suppose naive Nelson tries a variant. He wants to convince Victor that he knows x, so he chooses a random r as before, but does not send h₂. Victor asks for r and Nelson sends it. They do this several times. Why is Victor not convinced of anything? What is the essential difference between Nelson’s scheme and Peggy’s scheme that causes this?
Naive Nelson thinks he understands zero-knowledge protocols. He wants to prove to Victor that he knows the factorization of n (which equals pq for two large primes p and q) without revealing this factorization to Victor or anyone else. Nelson devises the following procedure: Victor chooses a random integer x mod n, computes y ≡ x² (mod n), and sends y to Nelson. Nelson computes a square root s of y mod n and sends s to Victor. Victor checks that s² ≡ y (mod n). Victor repeats this 20 times.
Describe how Nelson computes s. You may assume that p and q are ≡ 3 (mod 4) (see Section 3.9).
Explain how Victor can use this procedure to have a high probability of finding the factorization of n. (Therefore, this is not a zero-knowledge protocol.)
Suppose Eve is eavesdropping and hears the values of each y and s. Is it likely that Eve obtains any useful information? (Assume no value of y repeats.)
Exercise 2 gave a zero-knowledge proof that Peggy knows a discrete logarithm. Here is another method. Suppose p is a large prime, α is a primitive root, and β ≡ α^x (mod p). The numbers p, α, β are public. Peggy wants to prove to Victor that she knows x without revealing it. They do the following:
1. Peggy chooses a random integer k with 0 ≤ k < p − 1, computes γ ≡ α^k (mod p), and sends γ to Victor.
2. Victor chooses a random integer r with 0 ≤ r < p − 1 and sends r to Peggy.
3. Peggy computes y ≡ k + xr (mod p − 1) and sends y to Victor.
4. Victor checks whether α^y ≡ γβ^r (mod p). If so, he believes that Peggy knows x.
Show that the verification equation holds if the procedure is followed correctly.
Does Victor obtain any information that will allow him to compute x?
Suppose Eve finds out the values of γ, r, and y. Will she be able to determine x?
Suppose Peggy repeats the procedure with the same value of k, but Victor uses different values r₁ and r₂. How can Eve, who has listened to all communications between Victor and Peggy, determine x?
The preceding procedure is the basis for the Schnorr identification scheme. Victor could be a bank and x could be Peggy’s personal identification number. The bank stores β, and Peggy must prove she knows x to access her account. Alternatively, Victor could be a central computer and Peggy could be logging on to the computer through nonsecure telephone lines. Peggy’s password is x, and the central computer stores β.
In the Schnorr scheme, p is usually chosen so that p − 1 has a large prime factor q, and α, instead of being a primitive root, is taken to satisfy α^q ≡ 1 (mod p). The congruence defining y is then taken mod q. Moreover, r is taken to satisfy 0 ≤ r < 2^t for some t, for example, t = 40.
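A single round with these subgroup choices can be sketched as follows. The primes and the base are toy illustrative values (real parameters would be hundreds of digits):

```python
import random

q = 101                          # prime factor of p - 1 (toy size)
p = 607                          # prime, with p - 1 = 606 = 2 * 3 * 101
alpha = pow(3, (p - 1) // q, p)  # an element of order q
assert alpha != 1 and pow(alpha, q, p) == 1

x = random.randrange(1, q)       # Peggy's secret
beta = pow(alpha, x, p)          # stored by the verifier

k = random.randrange(1, q)       # Peggy's commitment
gamma = pow(alpha, k, p)
r = random.randrange(0, 2**40)   # Victor's challenge, with t = 40
y = (k + x * r) % q              # Peggy's response, computed mod q
assert pow(alpha, y, p) == gamma * pow(beta, r, p) % p
```

Because α has order q, reducing the response mod q (rather than mod p − 1) does not disturb the verification congruence, and it keeps the transmitted numbers small.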
Peggy claims that she knows an RSA plaintext. That is, n, e, c are public and Peggy claims that she knows m such that m^e ≡ c (mod n). She wants to prove this to Victor using a zero-knowledge protocol. Peggy and Victor perform the following steps:
1. Peggy chooses a random integer r₁ mod n and computes r₂ ≡ m r₁⁻¹ (mod n) (assume that gcd(r₁, n) = 1).
2. Peggy computes x₁ ≡ r₁^e (mod n) and x₂ ≡ r₂^e (mod n) and sends x₁, x₂ to Victor.
3. Victor checks that x₁x₂ ≡ c (mod n).
Give the remaining steps of the protocol. Victor should be at least 99% convinced that Peggy is not lying.
Suppose that p is a large prime, α is a primitive root, and β ≡ α^x (mod p). Peggy wants to prove to Victor, using a zero-knowledge protocol, that she knows a value of x with α^x ≡ β (mod p). Peggy and Victor do the following:
1. Peggy chooses three random integers x₁, x₂, x₃ with x₁ + x₂ + x₃ ≡ x (mod p − 1).
2. Peggy computes y_i ≡ α^(x_i) (mod p) for i = 1, 2, 3 and sends y₁, y₂, y₃ to Victor.
3. Victor checks that y₁y₂y₃ ≡ β (mod p).
Design the remaining steps of this protocol so that Victor is at least 99% convinced that Peggy is not lying. (Note: There are two ways for Victor to proceed in Step 4. One has a higher probability of catching Peggy, if she is cheating, than the other.)
Give a reasonable method for Peggy to choose three random numbers x₁, x₂, x₃ such that x₁ + x₂ + x₃ ≡ x (mod p − 1). (A method that doesn’t work is “Choose three random numbers and see if their sum is x. If not, try again.”)
Suppose that n is the product of two large primes and that y ≡ x² (mod n) is given. Peggy wants to prove to Victor, using a zero-knowledge protocol, that she knows a value of x with x² ≡ y (mod n). Peggy and Victor do the following:
1. Peggy chooses three random integers x₁, x₂, x₃ with x₁x₂x₃ ≡ x (mod n).
2. Peggy computes y_i ≡ x_i² (mod n) for i = 1, 2, 3 and sends y₁, y₂, y₃ to Victor.
3. Victor checks that y₁y₂y₃ ≡ y (mod n).
Design the remaining steps of this protocol so that Victor is at least 99% convinced that Peggy is not lying. (Note: There are two ways for Victor to proceed in Step 4. One has a higher probability of catching Peggy, if she is cheating, than the other.)
Give a reasonable method for Peggy to choose three random numbers x₁, x₂, x₃ such that x₁x₂x₃ ≡ x (mod n). (A method that doesn’t work is “Choose three random numbers and see if their product is x. If not, try again.”)
Peggy claims that she knows an RSA plaintext. That is, n, e, c are public and Peggy claims that she knows m such that m^e ≡ c (mod n). Devise a zero-knowledge protocol similar to those used in Exercises 6 and 7 for Peggy to convince Victor that she knows m.
In this chapter we introduce the theoretical concepts behind the security of a cryptosystem. The basic question is the following: If Eve observes a piece of ciphertext, does she gain any new information about the encryption key that she did not already have? To address this issue, we need a mathematical definition of information. This involves probability and the use of a very important measure called entropy.
Many of the ideas in this chapter originated with Claude Shannon in the 1940s.
Before we start, let’s consider an example. Roll a standard six-sided die. Let A be the event that the number of dots is odd, and let B be the event that the number of dots is at least 3. If someone tells you that the roll belongs to the event A ∩ B, then you know that there are only two possibilities for the roll, namely 3 and 5. In this sense, A ∩ B tells you more about the value of the roll than just the event A, or just the event B. Thus the information contained in the event A ∩ B is larger than the information just in A or just in B.
The idea of information is closely linked with the idea of uncertainty. Going back to the example of the die, if you are told that the event A ∩ B happened, you become less uncertain about what the value of the roll was than if you are simply told that event A occurred. Thus the information increased while the uncertainty decreased. Entropy provides a measure of the increase in information, or the decrease in uncertainty, provided by the outcome of an experiment.
In this section we briefly introduce the concepts from probability needed for what follows. An understanding of probability and the various identities that arise is essential for the development of entropy.
Consider an experiment with possible outcomes in a finite set. For example, the experiment could be flipping a coin, with outcomes heads and tails. We assume each outcome is assigned a probability. In the present example, the probability of heads is 1/2 and the probability of tails is 1/2. Often, the outcome of an experiment is called a random variable, which we denote by X.
In general, for each outcome x, denote the probability that X = x by

p(x) = P(X = x).

Note that Σ_x p(x) = 1. If A is a set of outcomes, let

p(A) = Σ_{x ∈ A} p(x),

which is the probability that X takes a value in A.
Often one performs an experiment where one is measuring several different events. These events may or may not be related, but they may be lumped together to form a new random event. For example, if we have two random events X and Y with possible outcomes x₁, …, x_n and y₁, …, y_m, respectively, then we may create a new random event Z = (X, Y) that groups the two events together. In this case, the new event has as its set of possible outcomes the pairs (x_i, y_j), and Z is sometimes called a joint random variable.
Draw a card from a standard deck. Let X be the suit of the card, so the possible outcomes of X are club, diamond, heart, spade. Let Y be the value of the card, so the possible outcomes of Y are ace, 2, 3, …, 10, jack, queen, king. Then Z = (X, Y) gives the 52 possibilities for the card. Note that if x is a suit and y is a value, then p(x, y) is simply the probability that the card drawn has suit x and value y. Since all cards are equally probable, this probability is 1/52, which is the probability that X = x (namely 1/4) times the probability that Y = y (namely 1/13). As we discuss later, this means X and Y are independent.
Roll a die. Suppose we are interested in two things: whether the number of dots is odd and whether the number is at least 2. Let X = 0 if the number of dots is even and X = 1 if the number of dots is odd. Let Y = 0 if the number of dots is less than 2 and Y = 1 if the number of dots is at least 2. Then (X, Y) gives us the results of both experiments together. Note that the probability that the number of dots is odd and less than 2 is 1/6, since this happens only for a roll of 1. This is not equal to P(X = 1)P(Y = 0), which is (1/2)(1/6) = 1/12. This means that X and Y are not independent. As we’ll see, this is closely related to the fact that knowing one of them gives us information about the other.
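The die example can be checked by enumerating the six equally likely rolls, with X and Y as just defined:

```python
from fractions import Fraction

rolls = range(1, 7)

def prob(event):
    # exact probability of an event over the six equally likely rolls
    return Fraction(sum(1 for r in rolls if event(r)), 6)

p_joint = prob(lambda r: r % 2 == 1 and r < 2)   # odd and less than 2
p_x1 = prob(lambda r: r % 2 == 1)                # X = 1 (odd)
p_y0 = prob(lambda r: r < 2)                     # Y = 0 (less than 2)

assert p_joint == Fraction(1, 6)
assert p_x1 * p_y0 == Fraction(1, 12)
assert p_joint != p_x1 * p_y0                    # X and Y are not independent
```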
We denote

p(x, y) = P(X = x, Y = y).

Note that we can recover the probability that X = x as

p(x) = Σ_y p(x, y).

We say that two random events X and Y are independent if

p(x, y) = p(x)p(y)

for all x and all y. In the preceding example, the suit of a card and the value of the card were independent.
We are also interested in the probabilities for Y given that X = x has occurred. If p(x) ≠ 0, define the conditional probability of Y = y given that X = x to be

p(y | x) = P(Y = y | X = x) = p(x, y) / p(x).

One way to think of this is that we have restricted to the outcomes where X = x. These have total probability p(x) = Σ_y p(x, y). The fraction of this sum that comes from Y = y is p(x, y)/p(x).
Note that X and Y are independent if and only if

p(y | x) = p(y)

for all x with p(x) ≠ 0 and all y. In other words, the probability of Y = y is unaffected by what happens with X.
There is a nice way to go from the conditional probability of Y given X to the conditional probability of X given Y.
If p(x) ≠ 0 and p(y) ≠ 0, then

p(x | y) = p(x) p(y | x) / p(y).
The proof consists of simply writing the conditional probabilities in terms of their definitions.
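As a quick numerical check of this rule, take the die of the earlier example, with "X = 1" meaning an odd roll and "Y = 1" meaning the roll is at least 2:

```python
from fractions import Fraction

p_x = Fraction(1, 2)            # P(X = 1): odd rolls are {1, 3, 5}
p_y = Fraction(5, 6)            # P(Y = 1): rolls at least 2 are {2, 3, 4, 5, 6}
p_y_given_x = Fraction(2, 3)    # of {1, 3, 5}, the rolls >= 2 are {3, 5}

# the conditional probability in the other direction, via the identity above
p_x_given_y = p_x * p_y_given_x / p_y
assert p_x_given_y == Fraction(2, 5)   # of {2, ..., 6}, the odd rolls are {3, 5}
```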
Roll a six-sided die and a ten-sided die. Which experiment has more uncertainty? If you make a guess at the outcome of each roll, you are more likely to be wrong with the ten-sided die than with the six-sided die. Therefore, the ten-sided die has more uncertainty. Similarly, compare a fair coin toss in which heads and tails are equally likely with a coin toss in which heads occur 90% of the time. Which has more uncertainty? The fair coin toss does, again because there is more randomness in its possibilities.
In our definition of uncertainty, we want to make sure that two random variables X and Y that have the same probability distribution have the same uncertainty. In order to do this, the measure of uncertainty must be a function only of the probability distribution and not of the names chosen for the outcomes.
We require the measure of uncertainty to satisfy the following properties:
1. To each set of nonnegative numbers p₁, …, pₙ with p₁ + ⋯ + pₙ = 1, the uncertainty is given by a number H(p₁, …, pₙ).
2. H should be a continuous function of the probability distribution, so a small change in the probability distribution should not drastically change the uncertainty.
3. H(1/(n+1), …, 1/(n+1)) ≥ H(1/n, …, 1/n) for all n. In other words, in situations where all outcomes are equally likely, the uncertainty increases when there are more possible outcomes.
4. If 0 ≤ q ≤ 1, then

H(p₁, …, pₙ₋₁, qpₙ, (1 − q)pₙ) = H(p₁, …, pₙ) + pₙ H(q, 1 − q).

What this means is that if the nth outcome is broken into two suboutcomes, with probabilities qpₙ and (1 − q)pₙ, then the total uncertainty is increased by the uncertainty caused by the choice between the two suboutcomes, multiplied by the probability pₙ that we are in this case to begin with. For example, if we roll a six-sided die, we can record two outcomes: even and odd. This has uncertainty H(1/2, 1/2). Now suppose we break the outcome even into the suboutcomes 2 and “4 or 6.” Then we have three possible outcomes: 2, “4 or 6,” and odd. We have

H(1/6, 1/3, 1/2) = H(1/2, 1/2) + (1/2) H(1/3, 2/3).
The first term is the uncertainty caused by even versus odd. The second term is the uncertainty added by splitting even into two suboutcomes.
Starting from these basic assumptions, Shannon [Shannon2] showed the following:
Let H be a function satisfying properties (1)–(4). In other words, for each random variable X taking values with probabilities p₁, …, pₙ, the function assigns a number H(X) = H(p₁, …, pₙ) subject to the conditions (1)–(4). Then H must be of the form

H(X) = −λ Σ_i p_i log₂ p_i,

where λ is a nonnegative constant and where the sum is taken over those i such that p_i ≠ 0.
Because of the theorem, we define the entropy of the random variable X to be

H(X) = −Σ_x p(x) log₂ p(x),

where the sum is over the outcomes x with p(x) ≠ 0.
The entropy is a measure of the uncertainty in the outcome of X. Note that since 0 ≤ p(x) ≤ 1, each term −p(x) log₂ p(x) ≥ 0, so H(X) ≥ 0; there is no such thing as negative uncertainty.
The observant reader might notice that there are problems when we have elements that have probability p(x) = 0. In this case we define 0 · log₂ 0 = 0, which is justified by looking at the limit of p log₂ p as p → 0. It is typical convention that the logarithm is taken base 2, in which case entropy is measured in bits. The entropy of X may also be interpreted as the expected value of −log₂ p(X) (recall that the expected value of a function f(X) is Σ_x p(x)f(x)).
We now look at some examples.
Consider a fair coin toss. There are two outcomes, each with probability 1/2. The entropy of this random event is
H(1/2, 1/2) = -(1/2) log2(1/2) - (1/2) log2(1/2) = 1.
This means that the result of the coin flip gives us 1 bit of information, or that the uncertainty in the outcome of the coin flip is 1 bit.
Consider a nonfair coin toss with probability p of getting heads and probability 1 - p of getting tails (where 0 <= p <= 1). The entropy of this event is
H(p, 1 - p) = -p log2 p - (1 - p) log2(1 - p).
If one considers H(p, 1 - p) as a function of p, one sees that the entropy is a maximum when p = 1/2. (For a more general statement, see Exercise 14.)
Consider an n-sided fair die. There are n outcomes, each with probability 1/n. The entropy is
H(1/n, ..., 1/n) = -SUM_{i=1}^{n} (1/n) log2(1/n) = log2 n.
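These small examples are easy to check by machine. The following is a minimal Python sketch of the entropy formula (an illustration of the definition above, not code from the text; the function name entropy is our own):

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p) in bits; terms with p = 0 are omitted,
    following the convention 0*log2(0) = 0."""
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy([1/6] * 6))    # fair six-sided die: log2(6), about 2.585 bits
print(entropy([1.0]))        # a certain outcome carries no information: 0.0
```

Note that the list of probabilities must sum to 1 for the result to be an entropy in the sense defined above.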
There is a relationship between entropy and the number of yes-no questions needed to determine accurately the outcome of a random event. If one considers a totally nonfair coin toss where p = 1, then H(1, 0) = 0. This result can be interpreted as not requiring any questions to determine what the value of the event was. If someone rolls a four-sided die, then it takes two yes-no questions to find out the outcome. For example: Is the number less than 3? Is the number odd?
A slightly more subtle example is obtained by flipping two coins. Let X be the number of heads, so the possible outcomes are 0, 1, 2. The probabilities are 1/4, 1/2, 1/4, and the entropy is
H(1/4, 1/2, 1/4) = -(1/4) log2(1/4) - (1/2) log2(1/2) - (1/4) log2(1/4) = 3/2.
Note that we can average 3/2 questions to determine the outcome. For example, the first question could be "Is there exactly one head?" Half of the time, this will suffice to determine the outcome. The other half of the time a second question is needed, for example, "Are there two heads?" So the average number of questions equals the entropy.
Another way of looking at H(X) is that it measures the number of bits of information that we obtain when we are given the outcome of X. For example, suppose the outcome of X is a random 4-bit number, where each possibility has probability 1/16. As computed previously, the entropy is H = log2 16 = 4, which says we have received four bits of information when we are told the value of X.
In a similar vein, entropy relates to the minimal number of bits necessary to represent an event on a computer (which is a binary device). See Section 20.3. There is no sense recording events whose outcomes can be predicted with 100% certainty; it would be a waste of space. In storing information, one wants to encode just the uncertain parts, because that is where the real information is.
If we have two random variables X and Y, the joint entropy H(X, Y) is defined as
H(X, Y) = -SUM_x SUM_y p(x, y) log2 p(x, y),
where p(x, y) is the probability that X = x and Y = y. This is just the entropy of the joint random variable (X, Y) discussed in Section 20.1.
In a cryptosystem, we might want to know the uncertainty in a key, given knowledge of the ciphertext. This leads us to the concept of conditional entropy, which is the amount of uncertainty in Y, given X. It is defined to be
H(Y|X) = SUM_x p(x) ( -SUM_y p(y|x) log2 p(y|x) ) = -SUM_x SUM_y p(x, y) log2 p(y|x).
The last equality follows from the relationship p(x, y) = p(x) p(y|x). The quantity H(Y|X = x) is the uncertainty in Y given the information that X = x; it is defined in terms of conditional probabilities by the expression in parentheses in the first sum. We calculate H(Y|X) by forming a weighted sum of these uncertainties to get the total uncertainty in Y given that we know the value of X.
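The definition translates directly into code. Here is a short Python sketch (our own illustration; the two joint distributions below are toy examples chosen for this sketch, not taken from the text):

```python
from math import log2

def cond_entropy(joint):
    """H(Y|X) = -sum_{x,y} p(x,y) * log2 p(y|x), where joint[(x, y)] = p(x, y)."""
    # Marginal distribution p(x), obtained by summing the joint over y.
    px = {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    # p(y|x) = p(x, y) / p(x); terms with p(x, y) = 0 are omitted.
    return sum(-p * log2(p / px[x]) for (x, y), p in joint.items() if p > 0)

# Two independent fair coins: knowing X tells us nothing about Y, so H(Y|X) = H(Y) = 1.
indep = {(x, y): 0.25 for x in "HT" for y in "HT"}
print(cond_entropy(indep))   # 1.0

# Y is a copy of X: knowing X removes all uncertainty in Y, so H(Y|X) = 0.
copy = {("H", "H"): 0.5, ("T", "T"): 0.5}
print(cond_entropy(copy))    # 0.0
```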
The preceding definition of conditional entropy uses the weighted average, over the various x, of the entropy of Y given X = x. Note that one could instead consider the unweighted sum SUM_x H(Y|X = x). This sum does not have the properties that information or uncertainty should have. For example, if X and Y are independent, then this definition would imply that the uncertainty of Y given X is greater than the uncertainty of Y (see Exercise 15). This clearly should not be the case.
We now derive an important tool, the chain rule for entropies. It will be useful in Section 20.4.
(Chain Rule). H(X, Y) = H(X) + H(Y|X).
Proof. Using p(x, y) = p(x) p(y|x) and SUM_y p(x, y) = p(x), we have
H(X, Y) = -SUM_x SUM_y p(x, y) log2 p(x, y)
        = -SUM_x SUM_y p(x, y) ( log2 p(x) + log2 p(y|x) )
        = -SUM_x p(x) log2 p(x) - SUM_x SUM_y p(x, y) log2 p(y|x)
        = H(X) + H(Y|X).
What does the chain rule tell us? It says that the uncertainty of the joint event (X, Y) is equal to the uncertainty of event X plus the uncertainty of event Y given that event X has happened.
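The chain rule is easy to verify numerically. In the following Python sketch (our own illustrative setup, not from the text), X is the result of the first of two fair coin flips and Y is the total number of heads:

```python
from math import log2

def H(probs):
    """Entropy in bits of a list of probabilities."""
    return sum(-p * log2(p) for p in probs if p > 0)

# Joint distribution: X = first coin (0 or 1), Y = total heads in two flips.
joint = {(0, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25, (1, 2): 0.25}

H_XY = H(joint.values())                 # joint entropy H(X, Y)
px = {0: 0.5, 1: 0.5}                    # marginal distribution of X
H_X = H(px.values())
H_Y_given_X = sum(-p * log2(p / px[x]) for (x, y), p in joint.items())

print(H_XY, H_X + H_Y_given_X)           # both sides of the chain rule: 2.0 2.0
```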
We now state three more results about entropy.
H(X) <= log2 #X, where #X denotes the number of possible values of X. We have equality if and only if all values are equally likely.
H(X, Y) <= H(X) + H(Y).
(Conditioning reduces entropy) H(Y|X) <= H(Y), with equality if and only if X and Y are independent.
The first result states that you are most uncertain when the probability distribution is uniform. Referring back to the example of the nonfair coin flip, the entropy was maximum for p = 1/2. This extends to events with more possible outcomes. For a proof of (1), see [Welsh, p. 5].
The second result says that the information contained in the pair (X, Y) is at most the information contained in X plus the information contained in Y. The reason for the inequality is that the information supplied by X and Y may overlap (which happens when X and Y are not independent). For a proof of (2), see [Stinson].
The third result is one of the most important results in information theory. Its interpretation is very simple: The uncertainty one has in Y, given that event X occurred, is at most the uncertainty in Y alone. That is, X can only tell you information about Y; it can't make you any more uncertain about Y.
The third result is an easy corollary of the second plus the chain rule:
H(Y|X) = H(X, Y) - H(X) <= H(X) + H(Y) - H(X) = H(Y).
Information theory originated in the late 1940s from the seminal papers by Claude Shannon. One of the primary motivations behind Shannon’s mathematical theory of information was the problem of finding a more compact way of representing data. In short, he was concerned with the problem of compression. In this section we briefly touch on the relationship between entropy and compression and introduce Huffman codes as a method for more succinctly representing data.
For more on how to compress data, see [Cover-Thomas] or [Nelson-Gailly].
Suppose we have an alphabet with four letters a, b, c, d, and suppose these letters appear in a text with frequencies as follows.

letter      a    b    c    d
frequency  1/2  1/4  1/8  1/8

We could represent a as the binary string 00, b as 01, c as 10, and d as 11. This means that the message would average two bits per letter. However, suppose we represent a as 1, b as 01, c as 001, and d as 000. Then the average number of bits per letter is
1 * (1/2) + 2 * (1/4) + 3 * (1/8) + 3 * (1/8) = 1.75
(the number of bits for a times the frequency of a, plus the number of bits for b times the frequency of b, etc.). This encoding of the letters is therefore more efficient.
In general, we have a random variable X with outputs in a set {x_1, ..., x_n}. We want to represent the outputs in binary in an efficient way; namely, the average number of bits per output should be as small as possible.
An early example of such a procedure is Morse code, which represents letters as sequences of dots and dashes and was developed to send messages by telegraph. Morse asked printers which letters were used most often and gave the more frequent letters shorter representations. For example, e is represented as . and t as -. But q is --.- and z is --..
A more recent method was developed by Huffman. The idea is to list all the outputs and their probabilities. The smallest two are assigned 1 and 0 and then combined to form an output with a larger probability. The same procedure is then applied to the new list, assigning 1 and 0 to the two smallest, then combining them to form a new list. This procedure is continued until there is only one output remaining. The binary strings are then obtained by reading backward through the procedure, recording the bits that have been assigned to a given output and to combinations containing it. This is best explained by an example.
Suppose we have outputs a, b, c, d with probabilities 1/2, 1/4, 1/8, 1/8, as in the preceding example. The diagram in Figure 20.1 gives the procedure.
Note that when there were two choices for the lowest probability, we made a random choice for which one received 0 and which one received 1. Tracing backward through the table, we see that a received a 1, b received 01, c received 001, and d received 000. These are exactly the assignments made previously that gave a low number of bits per letter.
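The merging procedure is short to implement. Here is a Python sketch using a heap (our own illustration; the particular 0/1 labeling of each merge is arbitrary, so the codewords it produces differ from those in the text by swapping 0s and 1s, but the codeword lengths agree):

```python
import heapq

def huffman(freqs):
    """Build a Huffman code; freqs maps symbol -> probability.
    Returns a dict mapping each symbol to its binary codeword."""
    # Heap entries: (probability, tiebreak counter, {symbol: code-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # the two smallest probabilities
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (p0 + p1, n, merged))   # combine into one output
        n += 1
    return heap[0][2]

freqs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}
code = huffman(freqs)
print(code)                                     # codeword lengths 1, 2, 3, 3
avg = sum(len(code[s]) * p for s, p in freqs.items())
print(avg)                                      # 1.75 bits per letter
```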
A useful feature of Huffman encoding is that it is possible to read a message one letter at a time. For example, the string 011000 can only be read as bad; moreover, as soon as we have read the first two bits 01, we know that the first letter is b.
Suppose instead that we wrote the bits assigned to letters in reverse order, so that b is 10 and c is 100. Then, when reading the message 101000 from the left, the first letter cannot be determined until more bits have been read, since the message potentially could start with b or c.
Even worse, suppose we had assigned 0 to a instead of 1. Then the messages ab and c would both be written as 001 and could not be distinguished. It is possible to show that Huffman encoding avoids these two problems.
The average number of bits per output is closely related to the entropy.
Let L be the average number of bits per output for the Huffman encoding of the random variable X. Then
H(X) <= L < H(X) + 1.
This result agrees with the interpretation that the entropy measures how many bits of information are contained in the output of X. We omit the proof. In our example, the entropy is
-(1/2) log2(1/2) - (1/4) log2(1/4) - (1/8) log2(1/8) - (1/8) log2(1/8) = 1.75,
which equals the average number of bits per output of the Huffman encoding.
Intuitively, the one-time pad provides perfect secrecy. In Section 4.4, we gave a mathematical meaning to this statement. In the present section, we repeat some of the arguments of that section and phrase some of the ideas in terms of entropy.
Suppose we have a cipher system with a set of possible plaintexts, a set of possible ciphertexts, and a set of keys. Each possible plaintext has a certain probability of occurring; some are more likely than others. The choice of a key is always assumed to be independent of the choice of plaintext. Each possible ciphertext then occurs with some probability, depending on the probabilities for the plaintexts and the keys.
If Eve intercepts a ciphertext, how much information does she obtain about the key? In other words, what is the conditional entropy H(K|C)? Initially, the uncertainty in the key was H(K). Has the knowledge of the ciphertext decreased the uncertainty?
Suppose we have three possible plaintexts, a, b, c, with probabilities .5, .3, .2, and two keys, k1 and k2, each used with probability .5. Suppose the possible ciphertexts are U, V, W. Let e_k be the encryption function for the key k. Suppose
e_k1(a) = U,  e_k1(b) = V,  e_k1(c) = W,
e_k2(a) = U,  e_k2(b) = W,  e_k2(c) = V.
Let p(a) denote the probability that the plaintext is a, etc. The probability that the ciphertext is U is
p(U) = p(k1) p(a) + p(k2) p(a) = (.5)(.5) + (.5)(.5) = .5.
Similarly, we calculate p(V) = (.5)(.3) + (.5)(.2) = .25 and p(W) = .25.
Suppose someone intercepts a ciphertext. This gives some information on the plaintext. For example, if the ciphertext is U, then it can be deduced immediately that the plaintext was a. If the ciphertext is V, the plaintext was either b or c.
We can even say more: The probability that a ciphertext is V is .25, so the conditional probability that the plaintext was b, given that the ciphertext is V, is
p(b | V) = p(b and V) / p(V) = (.5)(.3) / .25 = .6.
Similarly, p(c | V) = .4 and p(a | V) = 0. We can also calculate p(a | W) = 0, p(b | W) = .6, and p(c | W) = .4.
Note that the original probabilities of the plaintexts were .5, .3, and .2; knowledge of the ciphertext allows us to revise the probabilities. Therefore, the ciphertext gives us information about the plaintext. We can quantify this via the concept of conditional entropy. First, the entropy of the plaintext is
H(P) = -(.5 log2 .5 + .3 log2 .3 + .2 log2 .2) ≈ 1.485.
The conditional entropy of P given C is
H(P|C) = p(U) H(P | C = U) + p(V) H(P | C = V) + p(W) H(P | C = W)
       = (.5)(0) + (.25) H(.6, .4) + (.25) H(.6, .4) ≈ .485.
Therefore, in the present example, the uncertainty for the plaintext decreases when the ciphertext is known.
On the other hand, we suspect that for the one-time pad the ciphertext yields no information about the plaintext that was not known before. In other words, the uncertainty for the plaintext should equal the uncertainty for the plaintext given the ciphertext. This leads us to the following definition and theorem.
A cryptosystem has perfect secrecy if H(P|C) = H(P).
The one-time pad has perfect secrecy.
Proof. Recall that the basic setup is the following: There is an alphabet with N letters (for example, N could be 2 or 26). The possible plaintexts consist of strings of characters of length L. The ciphertexts are strings of characters of length L. There are N^L keys, each consisting of a sequence of length L denoting the various shifts to be used. The keys are chosen randomly, so each occurs with probability N^(-L).
Let c be a possible ciphertext. As before, we calculate the probability p(C = c) that c occurs:
p(C = c) = SUM p(P = p) p(K = k).
Here e_k(p) denotes the ciphertext obtained by encrypting p using the key k, and the sum is over those pairs (p, k) such that p encrypts to c. Note that we have used the independence of P and K to write the joint probability as the product of the individual probabilities.
In the one-time pad, every key has equal probability N^(-L), so we can replace p(K = k) in the above sum by N^(-L). We obtain
p(C = c) = N^(-L) SUM p(P = p).
We now use another important feature of the one-time pad: For each plaintext p and each ciphertext c, there is exactly one key k such that e_k(p) = c. Therefore, every possible plaintext p occurs exactly once in the preceding sum. But the sum of the probabilities of all possible plaintexts is 1, so we obtain
p(C = c) = N^(-L).
This confirms what we already suspected: Every ciphertext occurs with equal probability.
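This uniformity can be checked by brute force on a small example. The following Python sketch uses a hypothetical 3-letter alphabet and a made-up plaintext distribution (both are our own choices for illustration, not from the text) and enumerates every key:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

N, L = 3, 2     # alphabet {0, 1, 2}, messages of length 2 (toy parameters)
plaintext_prob = {(0, 0): Fraction(1, 2), (1, 2): Fraction(1, 3), (2, 1): Fraction(1, 6)}

# One-time pad: the key is a uniform random shift sequence; c_i = (p_i + k_i) mod N.
cipher_prob = Counter()
for key in product(range(N), repeat=L):
    for p, prob in plaintext_prob.items():
        c = tuple((pi + ki) % N for pi, ki in zip(p, key))
        cipher_prob[c] += prob * Fraction(1, N**L)   # key independent of plaintext

print(set(cipher_prob.values()))   # a single value, 1/9: every ciphertext equally likely
```

Changing the plaintext probabilities does not change the result, which is exactly the point of the computation above.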
Now let's calculate some entropies. Since K and C each have N^L equally likely possibilities, we have
H(K) = H(C) = log2(N^L) = L log2 N.
We now calculate H(P, K, C) in two different ways. Since knowing (P, K) is the same as knowing (P, K, C) (the plaintext and key determine the ciphertext), we have
H(P, K, C) = H(P, K) = H(P) + H(K).
The last equality is because P and K are independent. Also, knowing (P, C) is the same as knowing (P, K, C), since P and C determine K for the one-time pad. Therefore,
H(P, K, C) = H(P, C) = H(C) + H(P|C).
The last equality is the chain rule. Equating the two expressions, and using the fact that H(C) = H(K), we obtain H(P|C) = H(P). This proves that the one-time pad has perfect secrecy.
The preceding proof yields the following more general result. Let #K denote the number of possible keys, #P the number of possible plaintexts, and #C the number of possible ciphertexts.
Consider a cryptosystem such that
Every key has probability 1/#K.
For each plaintext p and each ciphertext c, there is exactly one key k such that e_k(p) = c.
Then this cryptosystem has perfect secrecy.
It is easy to deduce from condition (2) that #C = #K: Fix a plaintext p; then k -> e_k(p) gives a one-to-one correspondence between keys and ciphertexts. Conversely, it can be shown that if #P = #C = #K and the system has perfect secrecy, then (1) and (2) hold (see [Stinson, Theorem 2.4]).
It is natural to ask how the preceding concepts apply to RSA. The possibly surprising answer is that H(P|C) = 0; namely, the ciphertext determines the plaintext. The reason is that entropy does not take into account computation time. The fact that it might take billions of years to factor the modulus n is irrelevant. What counts is that all the information needed to recover the plaintext is contained in the knowledge of n, e, and the ciphertext.
The more relevant concept for RSA is the computational complexity of breaking the system.
In an English text, how much information is obtained per letter? If we had a random sequence of letters, each appearing with probability 1/26, then the entropy would be log2 26 ≈ 4.7; so each letter would contain 4.7 bits of information. If we include spaces, we get log2 27 ≈ 4.75. But the letters are not equally likely: a has frequency .082, b has frequency .015, etc. (see Section 2.3). Therefore, we consider
-(.082 log2 .082 + .015 log2 .015 + ...) ≈ 4.18.
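This single-letter estimate is a one-line computation once the frequency table is in hand. The following Python sketch uses commonly cited approximate English letter frequencies (the specific numerical values in the table are an assumption of this sketch, of the kind tabulated in Section 2.3):

```python
from math import log2

# Approximate single-letter frequencies for English text (assumed values).
freq = {'a': .082, 'b': .015, 'c': .028, 'd': .043, 'e': .127, 'f': .022,
        'g': .020, 'h': .061, 'i': .070, 'j': .002, 'k': .008, 'l': .040,
        'm': .024, 'n': .067, 'o': .075, 'p': .019, 'q': .001, 'r': .060,
        's': .063, 't': .091, 'u': .028, 'v': .010, 'w': .024, 'x': .002,
        'y': .020, 'z': .001}

total = sum(freq.values())        # renormalize to absorb rounding error
H = sum(-(p / total) * log2(p / total) for p in freq.values())
print(round(H, 2))                # about 4.18, well below log2(26) ≈ 4.70
```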
However, this doesn't tell the whole story. Suppose we have the sequence of letters we are studyin. There is very little uncertainty as to what the last letter is; it is easy to guess that it is g. Similarly, if we see the letter q, it is extremely likely that the next letter is u. Therefore, the existing letters often give information about the next letter, which means that there is not as much additional information carried by that letter. This says that the entropy calculated previously is still too high. If we use tables of the frequencies of the digrams (a digram is a two-letter combination), we can calculate the conditional entropy of one letter, given the preceding letter, to be approximately 3.56 bits. Using trigram frequencies, we find that the conditional entropy of a letter, given the preceding two letters, is approximately 3.3 bits. This means that, on the average, if we know two consecutive letters in a text, the following letter carries 3.3 bits of additional information. Therefore, if we have a long text, we should expect to be able to compress it at least by a factor of around 4.7/3.3 ≈ 1.4.
Let L represent the letters of English. Let L^n denote the n-gram combinations. Define the entropy of English to be
H_English = lim_{n -> infinity} H(L^n) / n,
where H(L^n) denotes the entropy of n-grams. This gives the average amount of information per letter in a long text, and it also represents the average amount of uncertainty in guessing the next letter, if we already know a lot of the text. If the letters were all independent of each other, so the probability of the digram ab equaled the probability of a times the probability of b, then we would have H(L^n) = n H(L), and the limit would be H(L) ≈ 4.18, which is the entropy for one-letter frequencies. But the interactions of letters, as noticed in the frequencies for digrams and trigrams, lower the value of H_English.
How do we compute H_English? Calculating 100-gram frequencies is impossible. Even tabulating the most common of them and getting an approximation would be difficult. Shannon proposed the following idea.
Suppose we have a machine that is an optimal predictor, in the sense that, given a long string of text, it can calculate the probabilities for the letter that will occur next. It then guesses the letter with highest probability. If correct, it notes the letter and writes down a 1. If incorrect, it guesses the second most likely letter. If correct, it writes down a 2, etc. In this way, we obtain a sequence of numbers. For example, suppose the predictor's first guess for the first letter of a text is wrong but its second guess is correct; we write that letter and put 2 below it. Suppose the predictor then guesses the next letter correctly on its first try; we put 1 beneath it. Continuing, suppose it finds the following letters on its first guess, etc. We obtain a situation like the following:
Using the prediction machine, we can reconstruct the text. The prediction machine says that its second guess for the first letter will be correct, so we know the first letter. The predictor says that its first guess for the next letter is correct, so we know that one, too. Continuing in this way, we recover the entire text.
What this means is that if we have a machine for predicting, we can change a text into a string of numbers without losing any information, because we can reconstruct the text. Of course, we could attempt to write a computer program to do the predicting, but Shannon suggested that the best predictor is a person who speaks English. Of course, a person is unlikely to be as deterministic as a machine, and repeating the experiment (assuming the person forgets the text from the first time) might not yield an identical result. So reconstructing the text might present a slight difficulty. But it is still a reasonable assumption that a person approximates an optimal predictor.
Given a sequence of integers corresponding to a text, we can count the frequency of each number. Let q_i denote the frequency of the number i in the sequence.
Since the text and the sequence of numbers can be reconstructed from each other, their entropies must be the same. The largest the entropy can be for the sequence of numbers is when these numbers are independent. In this case, the entropy is -SUM_i q_i log2 q_i. However, the numbers are probably not independent. For example, if there are a couple of consecutive 1s, then perhaps the predictor has guessed the rest of the word, which means that there will be a few more 1s. However, we get an upper bound for the entropy, which is usually better than the one we obtain using frequencies of letters. Moreover, Shannon also found a lower bound for the entropy. His results are
SUM_i i (q_i - q_{i+1}) log2 i  <=  H_English  <=  -SUM_i q_i log2 q_i.
Actually, these are only approximate upper and lower bounds, since there is experimental error, and we are really considering a limit as .
These results allow an experimental estimation of the entropy of English. Alice chooses a text and Bob guesses the first letter, continuing until the correct guess is made. Alice records the number of guesses. Bob then tries to guess the second letter, and the number of guesses is again recorded. Continuing in this way, Bob tries to guess each letter. When he is correct, Alice tells him and records the number of guesses. Shannon gave Table 20.1 as a typical result of such an experiment. Note that he included spaces but ignored punctuation, so he had 27 possibilities. There are 102 symbols in all: seventy-nine 1s, eight 2s, three 3s, etc. This gives
q_1 = 79/102,  q_2 = 8/102,  q_3 = 3/102,  etc.
The upper bound for the entropy is therefore
Note that since we are using 0 * log2 0 = 0, the terms with q_i = 0 can be omitted. The lower bound is
A reasonable estimate is therefore that the entropy of English is near 1, maybe slightly more than 1.
If we want to send a long English text, we could write each letter (and the space) as a string of five bits. This would mean that a text of length 102, such as the preceding, would require 510 bits. It would be necessary to use something like this method if the letters were independent and equally likely. However, suppose we do a Huffman encoding of the message from Table 20.1. Let
All other numbers up to 27 can be represented by various combinations of six or more bits. To send the message requires
which is 1.68 bits per letter.
Note that five bits per letter is only slightly more than the “random” entropy 4.75, and 1.68 bits per letter is slightly more than our estimate of the entropy of English. These agree with the result that entropy differs from the average length of a Huffman encoding by at most 1.
One way to look at the preceding entropy calculations is to say that English is around 75% redundant. Namely, if we send a long message in standard written English, compared to the optimally compressed text, the ratio is approximately 4 to 1 (that is, the random entropy 4.75 divided by the entropy of English, which is around 1). In our example, we were close, obtaining a ratio near 3 to 1 (namely 4.75/1.68).
Define the redundancy R of English to be
R = 1 - H_English / log2 26.
Then R is approximately .75, which is the 75% redundancy mentioned previously.
Suppose we have a ciphertext. How many keys will decrypt it to something meaningful? If the text is long enough, we suspect that there is a unique key and a unique corresponding plaintext. The unicity distance for a cryptosystem is the length of ciphertext at which one expects that there is a unique meaningful plaintext. A rough estimate for the unicity distance is
U = log2(#K) / (R log2 N),
where #K is the number of possible keys, N is the number of letters or symbols, and R is the redundancy (see [Stinson]). We'll take R = .75 (whether we include spaces in our language or not; the difference is small).
For example, consider the substitution cipher, which has 26! keys. We have
U = log2(26!) / (.75 log2 26) ≈ 88.4 / 3.53 ≈ 25.
This means that if a ciphertext has length 25 or more, we expect that usually there is only one possible meaningful plaintext. Of course, if we have a ciphertext of length 25, there are probably several letters that have not appeared. Therefore, there could be several possible keys, all of which decrypt the ciphertext to the same plaintext.
As another example, consider the affine cipher. There are 12 * 26 = 312 keys, so
U = log2 312 / (.75 log2 26) ≈ 8.3 / 3.53 ≈ 2.35.
This should be regarded as only a very rough approximation. Clearly it should take a few more letters to get a unique decryption. But the estimate of 2.35 indicates that very few letters suffice to yield a unique decryption in most cases for the affine cipher.
Finally, consider the one-time pad for a message of length L. The encryption is a separate shift mod 26 for each letter, so there are 26^L keys. We obtain the estimate
U = log2(26^L) / (.75 log2 26) = L / .75 = 4L/3.
In this case, it says we need more letters than the entire ciphertext to get a unique decryption. This reflects the fact that all plaintexts are possible for any ciphertext.
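The three estimates above can be reproduced with a few lines of Python (our own sketch of the formula U = log2(#K)/(R log2 N)):

```python
from math import log2, factorial

def unicity(num_keys, R=0.75, N=26):
    """Unicity distance estimate: U = log2(#keys) / (R * log2 N)."""
    return log2(num_keys) / (R * log2(N))

print(round(unicity(factorial(26)), 1))   # substitution cipher: about 25.1
print(round(unicity(312), 2))             # affine cipher (12*26 keys): about 2.35
L = 100
print(round(unicity(26**L), 1))           # one-time pad, length 100: about 133.3 (= 4L/3)
```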
Let and be two independent tosses of a fair coin. Find the entropy and the joint entropy . Why is ?
Consider an unfair coin where the two outcomes, heads and tails, have probabilities and .
If the coin is flipped two times, what are the possible outcomes along with their respective probabilities?
Show that the entropy in part (a) is . How could this have been predicted without calculating the probabilities in part (a)?
A random variable takes the values with probabilities . Calculate the entropy .
Let be a random variable taking on integer values. The probability is 1/2 that is in the range , with all such values being equally likely, and the probability is 1/2 that the value is in the range , with all such values being equally likely. Compute .
Let be a random event taking on the values , all with positive probability. What is the general inequality/equality between and , where is the following?
In this problem we explore the relationship between the entropy of a random variable and the entropy of a function of the random variable. The following is a short proof that shows . Explain what principles are used in each of the steps.
Letting take on the values and letting , show that it is possible to have .
In part (a), show that you have equality if and only if is a one-to-one function (more precisely, is one-to-one on the set of outputs of that have nonzero probability).
The preceding results can be used to study the behavior of the run length coding of a sequence. Run length coding is a technique that is commonly used in data compression. Suppose that are random variables that take the values or . This sequence of random variables can be thought of as representing the output of a binary source. The run length coding of is a sequence that represents the lengths of consecutive symbols with the same value. For example, the sequence has a run length sequence of . Observe that L is a function of . Show that L and uniquely determine . Do L and determine ? Using these observations and the preceding results, compare , , and .
A bag contains five red balls, three white balls, and two black balls that are identical to each other in every manner except color.
Choose two balls from the bag with replacement. What is the entropy of this experiment?
What is the entropy of choosing two balls without replacement? (Note: In both parts, the order matters; i.e., red then white is not the same as white then red.)
We often run into situations where we have a sequence of random events. For example, a piece of text is a long sequence of letters. We are concerned with the rate of growth of the joint entropy as increases. Define the entropy rate of a sequence of random events as
A very crude model for a language is to assume that subsequent letters in a piece of text are independent and come from identical probability distributions. Using this, show that the entropy rate equals .
In general, there is dependence among the random variables. Assume that have the same probability distribution but are somehow dependent on each other (for example, if I give you the letters TH you can guess that the next letter is E). Show that
and thus that
(if the limit defining exists).
Suppose we have a cryptosystem with only two possible plaintexts. The plaintext occurs with probability and occurs with probability . There are two keys, and , and each is used with probability . Key encrypts to and to . Key encrypts to and to .
Calculate , the entropy for the plaintext.
Calculate , the conditional entropy for the plaintext given the ciphertext. (Optional hint: This can be done with no additional calculation by matching up this system with another well-known system.)
Consider a cryptosystem .
Explain why .
Suppose the system has perfect secrecy. Show that
and
Suppose the system has perfect secrecy and, for each pair of plaintext and ciphertext, there is at most one corresponding key that does the encryption. Show that .
Prove that for a cryptosystem we have
Consider a Shamir secret sharing scheme where any five people of a set of 20 can determine the secret , but no fewer can do so. Let be the entropy of the choice of , and let be the conditional entropy of , given the information supplied to the first person. What are the relative sizes of and ? (Larger, smaller, equal?)
Let be a random event taking on the values , all with equal probability.
What is the entropy ?
Let . What is ?
Show that the maximum of for occurs when .
Let for . Show that the maximum of
subject to the constraint , occurs when . (Hint: Lagrange multipliers could be useful in this problem.)
Suppose we define . Show that if and are independent, and has possible outputs, then .
Use (a) to show that is not a good description of the uncertainty of given .
In the mid-1980s, Miller and Koblitz introduced elliptic curves into cryptography, and Lenstra showed how to use elliptic curves to factor integers. Since that time, elliptic curves have played an increasingly important role in many cryptographic situations. One of their advantages is that they seem to offer a level of security comparable to classical cryptosystems that use much larger key sizes. For example, it is estimated in [Blake et al.] that certain conventional systems with a 4096-bit key size can be replaced by 313-bit elliptic curve systems. Using much shorter numbers can represent a considerable savings in hardware implementations.
In this chapter, we present some of the highlights. For more details on elliptic curves and their cryptologic uses, see [Blake et al.], [Hankerson et al.], or [Washington]. For a list of elliptic curves recommended by NIST for cryptographic uses, see [FIPS 186-2].
An elliptic curve E is the graph of an equation
y^2 = x^3 + bx + c,
where b and c are in whatever is the appropriate set (rational numbers, real numbers, integers mod p, etc.). In other words, let F be the rational numbers, the real numbers, or the integers mod a prime p (or, for those who know what this means, any field of characteristic not 2; but see Section 21.4). Then we assume b and c are in F and take E to be the set of pairs (x, y) with coordinates in F satisfying y^2 = x^3 + bx + c.
As will be discussed below, it is also convenient to include a point at infinity, which often will be denoted simply by ∞.
Let's consider the case of real numbers first, since this case allows us to work with pictures. The graph has two possible forms, depending on whether the cubic polynomial x^3 + bx + c has one real root or three real roots. The case of two components occurs when the cubic polynomial has three real roots. The case of one component occurs when the cubic polynomial has only one real root.
For technical reasons that will become clear later, we also include a "point at infinity," denoted ∞, which is most easily regarded as sitting at the top of the y-axis. It can be treated rigorously in the context of projective geometry (see [Washington]), but this intuitive notion suffices for what we need. The bottom of the y-axis is identified with the top, so ∞ also sits at the bottom of the y-axis.
Now let's look at elliptic curves mod p, where p is a prime. We can list the points on E by letting x run through the values 0, 1, ..., p - 1 and, for each x, solving y^2 ≡ x^3 + bx + c (mod p) for y. Note that we again include a point ∞.
Elliptic curves mod p are finite sets of points. It is these elliptic curves that are useful in cryptography.
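Listing the points is a simple search. In the following Python sketch, the curve y^2 ≡ x^3 + 4x + 4 (mod 5) is a hypothetical small example chosen for illustration (it is not a curve taken from the text):

```python
def curve_points(b, c, p):
    """All affine points on y^2 = x^3 + b*x + c over the integers mod p,
    found by trying every pair (x, y)."""
    return [(x, y) for x in range(p) for y in range(p)
            if (y * y - (x ** 3 + b * x + c)) % p == 0]

pts = curve_points(4, 4, 5)
print(pts)            # [(0, 2), (0, 3), (1, 2), (1, 3), (2, 0), (4, 2), (4, 3)]
print(len(pts) + 1)   # 8 points in all, counting the point at infinity
```

For large primes one would solve for y using modular square roots rather than trying every pair, but the brute-force search makes the definition transparent.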
Technical point: We assume that the cubic polynomial x^3 + bx + c has no multiple roots; equivalently, 4b^3 + 27c^2 ≠ 0. This excludes curves with singular points, such as cusps and self-intersections. Such curves will be discussed in Subsection 21.3.1.
Technical point: For most situations, equations of the form y^2 = x^3 + bx + c suffice for elliptic curves. In fact, in situations where we can divide by 3, a change of variables changes an equation y^2 = x^3 + ax^2 + bx + c into an equation of the form y^2 = x^3 + b'x + c'. See Exercise 1. However, sometimes it is necessary to consider elliptic curves given by equations of the form
y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6,
where a_1, a_2, a_3, a_4, a_6 are constants. If we are working mod p, where p > 3 is prime, or if we are working with real, rational, or complex numbers, then simple changes of variables transform the present equation into the form y^2 = x^3 + bx + c. However, if we are working mod 2 or mod 3, or with a finite field of characteristic 2 or 3 (that is, GF(2^n) or GF(3^n)), then we need to use the more general form. Elliptic curves over fields of characteristic 2 will be mentioned briefly in Section 21.4.
Historical point: Elliptic curves are not ellipses. They received their name from their relation to elliptic integrals such as
INTEGRAL dx / sqrt(x^3 + bx + c)
that arise in the computation of the arc length of ellipses.
that arise in the computation of the arc length of ellipses.
The main reason elliptic curves are important is that we can use any two points on the curve to produce a third point on the curve. Given points P_1 and P_2 on E, we obtain a third point P_3 on E as follows (see Figure 21.1): Draw the line L through P_1 and P_2 (if P_1 = P_2, take the tangent line to E at P_1). The line L intersects E in a third point P_3'. Reflect P_3' through the x-axis (i.e., change y to -y) to get P_3. Define a law of addition on E by
P_1 + P_2 = P_3.
Note that this is not the same as adding points in the plane.
Suppose E is defined by y^2 = x^3 + 73. Let P_1 = (2, 9) and P_2 = (3, 10). The line L through P_1 and P_2 is y = x + 7.
Substituting into the equation for E yields
(x + 7)^2 = x^3 + 73,
which yields x^3 - x^2 - 14x + 24 = 0. Since L intersects E in P_1 and P_2, we already know two roots, namely x = 2 and x = 3. Moreover, the sum of the three roots is minus the coefficient of x^2 (Exercise 1) and therefore equals 1. If x' is the third root, then
2 + 3 + x' = 1,
so the third point of intersection has x' = -4. Since y = x + 7, we have y = 3, so L intersects E in (-4, 3). Reflect across the x-axis to obtain
P_1 + P_2 = P_3 = (-4, -3).
Now suppose we want to add P_3 = (-4, -3) to itself. The slope of the tangent line to E at P_3 is obtained by implicitly differentiating the equation for E:
2y y' = 3x^2,  so  y' = 3x^2 / (2y) = 3(-4)^2 / (2(-3)) = -8,
where we have substituted (x, y) = (-4, -3). In this case, the line is y = -8x - 35. Substituting into the equation for E yields
(-8x - 35)^2 = x^3 + 73,
hence x^3 - 64x^2 - 560x - 1152 = 0. The sum of the three roots is 64 (= minus the coefficient of x^2). Because the line is tangent to E at P_3, it follows that x = -4 is a double root. Therefore,
(-4) + (-4) + x' = 64,
so the third root is x' = 72. The corresponding value of y (use the equation of the line) is y = -8(72) - 35 = -611. Changing y to -y yields
P_3 + P_3 = (72, 611).
What happens if we try to compute P + ∞? We make the convention that the lines through ∞ are vertical. Therefore, the line through ∞ and a point P = (x, y) intersects E in P and also in (x, −y). When we reflect (x, −y) across the x-axis, we get back P = (x, y). Therefore, P + ∞ = P.
We can also subtract points. First, observe that the line through (x, y) and (x, −y) is vertical, so the third point of intersection with E is ∞. The reflection of ∞ across the x-axis is still ∞ (that’s what we meant when we said ∞ sits at the top and at the bottom of the y-axis). Therefore, (x, y) + (x, −y) = ∞.
Since ∞ plays the role of an additive identity (in the same way that 0 is the identity for addition with integers), we define −(x, y) = (x, −y).
To subtract points, simply add the negative: P1 − P2 = P1 + (−P2).
Another way to express the addition law is to say that P + Q + R = ∞ if and only if P, Q, and R are the three points of intersection (counted with multiplicity) of a line with E.
(See Exercise 17.)
For computations, we can ignore the geometrical interpretation and work only with formulas, which are as follows:
Let E be given by y² = x³ + bx + c and let P1 = (x1, y1), P2 = (x2, y2).
Then P1 + P2 = P3 = (x3, y3),
where
x3 = m² − x1 − x2,  y3 = m(x1 − x3) − y1,
and
m = (y2 − y1)/(x2 − x1) if P1 ≠ P2,  m = (3x1² + b)/(2y1) if P1 = P2.
If the slope m is infinite, then P3 = ∞. There is one additional law: ∞ + P = P for all points P.
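These formulas can be translated directly into code. The following is a minimal sketch (the function name and conventions are our own) for a curve y² = x³ + bx + c over the integers mod a prime p, with ∞ represented as None; Python's `pow(u, -1, p)` computes u⁻¹ mod p:

```python
def ec_add(P, Q, b, p):
    """Add two points on y^2 = x^3 + b*x + c (mod p), p prime.
    The point at infinity is represented as None; the constant c is
    not needed, since the addition formulas involve only b."""
    if P is None:                       # infinity + Q = Q
        return Q
    if Q is None:                       # P + infinity = P
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # vertical line: P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p       # reflection across the x-axis
    return (x3, y3)
```

For example, on y² ≡ x³ + 4x + 4 (mod 2773), `ec_add((1, 3), (1, 3), 4, 2773)` returns (1771, 705). Over a composite modulus the inverse may fail to exist; that failure is exactly what the factorization method of Section 21.3 exploits.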
It can be shown that the addition law is associative: (P1 + P2) + P3 = P1 + (P2 + P3).
It is also commutative: P1 + P2 = P2 + P1.
When adding several points, it therefore doesn’t matter in what order the points are added nor how they are grouped together. In technical terms, we have found that the points of E form an abelian group. The point ∞ is the identity element of this group.
If k is a positive integer and P is a point on an elliptic curve, we can define kP = P + P + · · · + P, the sum of k copies of P.
We can extend this to negative multiples. For example, (−3)P = (−P) + (−P) + (−P), where −P is the reflection of P across the x-axis. The associative law means that we can group the summands in any way we choose when computing a multiple of a point. For example, suppose we want to compute kP for a large integer k. We do the additive version of successive squaring that was used in modular exponentiation: compute 2P = P + P, 4P = 2P + 2P, 8P = 4P + 4P, and so on, and then add together the doublings corresponding to the binary expansion of k.
The associative law means that such a sum of doublings can be grouped in any convenient order. The same multiple could also be computed by repeatedly adding P, but this is slower because it requires many more additions.
For more examples, see Examples 41–44 in the Computer Appendices.
If p is a prime, we can work with elliptic curves mod p using the aforementioned ideas. For example, consider E: y² ≡ x³ + 4x + 4 (mod 5).
The points on E are the pairs (x, y) mod 5 that satisfy the equation, along with the point at infinity. These can be listed as follows. The possibilities for x mod 5 are 0, 1, 2, 3, 4. Substitute each of these into the equation and find the values of y that solve the equation:
x = 0: y² ≡ 4, so y ≡ 2, 3
x = 1: y² ≡ 9 ≡ 4, so y ≡ 2, 3
x = 2: y² ≡ 20 ≡ 0, so y ≡ 0
x = 3: y² ≡ 43 ≡ 3, which has no solutions mod 5
x = 4: y² ≡ 84 ≡ 4, so y ≡ 2, 3
The points on E are (0, 2), (0, 3), (1, 2), (1, 3), (2, 0), (4, 2), (4, 3), ∞.
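This enumeration is easy to automate (a brute-force sketch, practical only for small p; the helper name is ours):

```python
def list_points(b, c, p):
    """List the affine points on y^2 = x^3 + b*x + c (mod p) by brute
    force; the point at infinity must be added separately."""
    # Precompute, for each square t mod p, the list of its square roots y.
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    points = []
    for x in range(p):
        for y in squares.get((x**3 + b * x + c) % p, []):
            points.append((x, y))
    return points
```

For the curve y² ≡ x³ + 4x + 4 (mod 5), it returns seven affine points; together with ∞ the curve has eight points.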
The addition of points on an elliptic curve mod p is done via the same formulas as given previously, except that a rational number a/b must be treated as a·b⁻¹, where b·b⁻¹ ≡ 1 (mod p). This requires that gcd(b, p) = 1.
More generally, it is possible to develop a theory of elliptic curves mod n for any integer n. In this case, when we encounter a fraction a/b, we need to have gcd(b, n) = 1. The situations where this fails form the key to using elliptic curves for factorization, as we’ll see in Section 21.3. There are various technical problems in the general theory that arise when n is composite, but the method to overcome these will not be needed in the following. For details on how to treat this case, see [Washington]. For our purposes, when we encounter an elliptic curve mod a composite n, we can pretend n is prime. If something goes wrong, we usually obtain useful information about n, for example its factorization.
Let’s compute 2P = (1, 3) + (1, 3) on the curve just considered. The slope is
m = (3·1² + 4)/(2·3) = 7/6 ≡ 2·1⁻¹ ≡ 2 (mod 5). Therefore,
x3 ≡ 2² − 1 − 1 ≡ 2,  y3 ≡ 2(1 − 2) − 3 ≡ 0 (mod 5).
This means that (1, 3) + (1, 3) = (2, 0) on E.
Here is a somewhat larger example. Let n = 2773. Let E: y² ≡ x³ + 4x + 4 (mod 2773), and let P = (1, 3).
Let’s compute 2P. To get the slope of the tangent line, we differentiate implicitly and evaluate at (1, 3): m = (3x² + 4)/(2y) = 7/6.
But we are working mod 2773. Using the extended Euclidean algorithm (see Section 3.2), we find that 6 · 2311 ≡ 1 (mod 2773), so we can replace 1/6 by 2311. Therefore, m ≡ 7 · 2311 ≡ 2312 (mod 2773).
The formulas yield
x3 ≡ 2312² − 1 − 1 ≡ 1771,  y3 ≡ 2312(1 − 1771) − 3 ≡ 705 (mod 2773).
The final answer is 2P = (1771, 705).
Now that we’re done with the example, we mention that n = 2773 is not prime. When we try to calculate 3P in Section 21.3, we’ll obtain the factorization of 2773.
Let E: y² ≡ x³ + bx + c (mod p) be an elliptic curve, where p is prime. We can list the points on E by letting x = 0, 1, …, p − 1 and seeing when x³ + bx + c is a square mod p. Since half of the nonzero numbers are squares mod p, we expect that x³ + bx + c will be a square approximately half the time. When it is a nonzero square, there are two square roots: y and −y. Therefore, approximately half the time we get two values of y and half the time we get no y. Therefore, we expect around p points. Including the point ∞, we expect a total of approximately p + 1 points. In the 1930s, H. Hasse made this estimate more precise.
Suppose E (mod p) has N points. Then |N − p − 1| < 2√p.
The proof of this theorem is well beyond the scope of this book (for a proof, see [Washington]). It can also be shown that whenever N and p satisfy the inequality of the theorem, there is an elliptic curve mod p with exactly N points.
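For small p, Hasse's bound can be checked numerically by brute-force counting (a sketch; the function name is ours, and Euler's criterion t^((p−1)/2) mod p decides whether t is a square):

```python
def count_points(b, c, p):
    """Count the points on y^2 = x^3 + b*x + c (mod p), including
    infinity.  Each x contributes the number of y with
    y^2 = x^3 + b*x + c, which is 0, 1, or 2."""
    def nroots(t):
        t %= p
        if t == 0:
            return 1                    # y = 0 is the only root
        # Euler's criterion: t is a nonzero square iff t^((p-1)/2) = 1
        return 2 if pow(t, (p - 1) // 2, p) == 1 else 0
    return 1 + sum(nroots(x**3 + b * x + c) for x in range(p))
```

For instance, `count_points(4, 4, 5)` returns 8, and |8 − 6| = 2 < 2√5, in agreement with the theorem.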
If p is large, it is infeasible to count the points on an elliptic curve by listing them. More sophisticated algorithms have been developed by Schoof, Atkin, Elkies, and others to deal with this problem. See the Sage Appendix.
Recall the classical discrete logarithm problem: We know that β ≡ α^k (mod p) for some k, and we want to find k. There is an elliptic curve version: Suppose we have points A and B on an elliptic curve E and we know that B = kA for some integer k. We want to find k. This might not look like a logarithm problem, but it is clearly the analog of the classical discrete logarithm problem. Therefore, it is called the discrete logarithm problem for elliptic curves.
There is no good general attack on the discrete logarithm problem for elliptic curves. There is an analog of the Pohlig-Hellman attack that works in some situations. Let E be an elliptic curve mod a prime p, let A be a point on E, and let N be the smallest integer such that NA = ∞. If N has only small prime factors, then it is possible to calculate the discrete logarithm mod the prime powers dividing N and then use the Chinese remainder theorem to find it (see Exercise 25). The Pohlig-Hellman attack can be thwarted by choosing E and A so that N has a large prime factor.
There is no good elliptic curve analog of the index calculus attack described in Section 10.2. This is because there is no good analog of “small.” You might try to use points with small coordinates in place of the “small primes,” but this doesn’t work. When you factor a number by dividing off the prime factors one by one, the quotients get smaller and smaller until you finish. On an elliptic curve, you could have a point with fairly small coordinates, subtract off a small point, and end up with a point with large coordinates (see Computer Problem 5). So there is no good way to know when you are making progress toward expressing a point in terms of the factor base of small points.
The Baby Step, Giant Step attack on discrete logarithms works for elliptic curves (Exercise 13(b)), although it requires too much memory to be practical in most situations. For other attacks, see [Blake et al.] and [Washington].
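The Baby Step, Giant Step idea carries over to elliptic curves essentially unchanged: store the baby steps jA, then walk B, B − mA, B − 2mA, … until a collision. A sketch (our own helper functions; feasible only when √(order) points fit in memory):

```python
from math import isqrt

def make_curve(b, p):
    """Addition, scalar multiplication, and negation on
    y^2 = x^3 + b*x + c (mod p); infinity is represented as None."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            m = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)
    def mult(k, P):
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R
    def neg(P):
        return None if P is None else (P[0], (-P[1]) % p)
    return add, mult, neg

def bsgs(A, B, order_bound, add, mult, neg):
    """Find k with B = k*A, assuming the order of A is at most
    order_bound.  Stores about sqrt(order_bound) baby steps."""
    m = isqrt(order_bound) + 1
    baby = {}
    P = None
    for j in range(m):                  # baby steps: j*A
        baby.setdefault(P, j)
        P = add(P, A)
    stride = neg(mult(m, A))            # giant stride: subtract m*A
    Q = B
    for i in range(m + 1):              # giant steps: B - i*m*A
        if Q in baby:
            return i * m + baby[Q]
        Q = add(Q, stride)
    return None
```

The memory cost, about √N stored points, is what makes this attack impractical for cryptographic group sizes.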
In most cryptographic systems, we must have a method for mapping our original message into a numerical value upon which we can perform mathematical operations. In order to use elliptic curves, we need a method for mapping a message onto a point on an elliptic curve. Elliptic curve cryptosystems then use elliptic curve operations on that point to yield a new point that will serve as the ciphertext.
The problem of encoding plaintext messages as points on an elliptic curve is not as simple as it was in the conventional case. In particular, there is no known polynomial time, deterministic algorithm for writing down points on an arbitrary elliptic curve E (mod p). However, there are fast probabilistic methods for finding points, and these can be used for encoding messages. These methods have the property that with small probability they will fail to produce a point. By appropriately choosing parameters, this probability can be made arbitrarily small.
Here is one method, due to Koblitz. The idea is the following. Let E: y² ≡ x³ + bx + c (mod p) be the elliptic curve. The message m (already represented as a number) will be embedded in the x-coordinate of a point. However, the probability is only about 1/2 that m³ + bm + c is a square mod p. Therefore, we adjoin a few bits at the end of m and adjust them until we get a number x such that x³ + bx + c is a square mod p.
More precisely, let K be a large integer so that a failure rate of 1/2^K is acceptable when trying to encode a message as a point. Assume that m satisfies (m + 1)K < p. The message m will be represented by a number x of the form mK + j, where 0 ≤ j < K. For j = 0, 1, 2, …, K − 1, compute x³ + bx + c and try to calculate the square root of x³ + bx + c (mod p). For example, if p ≡ 3 (mod 4), the method of Section 3.9 can be used. If there is a square root y, then we take Pm = (x, y); otherwise, we increment j by one and try again with the new x. We repeat this until either we find a square root or j = K. If j ever equals K, then we fail to map a message to a point. Since x³ + bx + c is a square approximately half of the time, we have about a 1/2^K chance of failure.
In order to recover the message m from the point Pm = (x, y), we simply calculate m by
m = ⌊x/K⌋,
where ⌊x/K⌋ denotes the greatest integer less than or equal to x/K.
Let p = 179 and suppose that our elliptic curve is y² = x³ + 2x + 7. If we are satisfied with a failure rate of 1/2^10, then we may take K = 10. Since we need (m + 1)K < p, we need m ≤ 16. Suppose our message is m = 5. We consider x of the form mK + j = 50 + j. The possible choices for x are 50, 51, …, 59. For x = 51 we get 51³ + 2·51 + 7 ≡ 121 (mod 179), and 121 ≡ 11² (mod 179). Thus, we represent the message m = 5 by the point Pm = (51, 11). The message can be recovered by m = ⌊51/10⌋ = 5.
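Koblitz's method can be sketched in a few lines (our own function names; we assume p ≡ 3 (mod 4) so that square roots of squares are t^((p+1)/4) mod p, as in Section 3.9):

```python
def koblitz_encode(m, K, b, c, p):
    """Embed the message m as a point on y^2 = x^3 + b*x + c (mod p).
    Assumes p = 3 (mod 4).  Tries x = m*K + j for j = 0, ..., K-1;
    returns None (probability about 1/2^K) if no x works."""
    assert p % 4 == 3 and (m + 1) * K < p
    for j in range(K):
        x = m * K + j
        t = (x**3 + b * x + c) % p
        y = pow(t, (p + 1) // 4, p)     # candidate square root of t
        if y * y % p == t:              # t really was a square mod p
            return (x, y)
    return None

def koblitz_decode(P, K):
    """Recover m = floor(x / K) from the x-coordinate."""
    return P[0] // K
```

Note that the y-coordinate is irrelevant for decoding, so it does not matter which of the two square roots is chosen.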
Suppose n is a number we wish to factor. Choose a random elliptic curve mod n and a point P on the curve. In practice, one chooses several (around 14 for numbers around 50 digits; more for larger integers) curves with points and runs the algorithm in parallel.
How do we choose the curve? First, choose a point P = (x, y) and a coefficient b. Then choose c so that P lies on the curve y² = x³ + bx + c; that is, set c ≡ y² − x³ − bx (mod n). This is much more efficient than choosing b and c and then trying to find a point.
For example, let n = 2773. Take P = (1, 3) and b = 4. Since we want 3² ≡ 1³ + 4·1 + c (mod 2773), we take c = 4. Therefore, our curve is E: y² ≡ x³ + 4x + 4 (mod 2773).
We calculated 2P = (1771, 705) in a previous example. Note that during the calculation, we needed to find 6⁻¹ (mod 2773). This required that gcd(6, 2773) = 1 and used the extended Euclidean algorithm, which was essentially a gcd calculation.
Now let’s calculate 3P = 2P + P. The line through the points (1771, 705) and (1, 3) has slope 702/1770. When we try to invert 1770 mod 2773, we find that gcd(1770, 2773) = 59, so we cannot do this. So what do we do? Our original goal was to factor 2773, so we don’t need to do anything more. We have found the factor 59, which yields the factorization 2773 = 59 · 47.
Here’s what happened. Using the Chinese remainder theorem, we can regard E as a pair of elliptic curves, one mod 59 and the other mod 47. It turns out that 3P = ∞ (mod 59), while 3P ≠ ∞ (mod 47). Therefore, when we tried to compute 3P, we had a slope that was infinite mod 59 but finite mod 47. In other words, we had a denominator that was 0 mod 59 but nonzero mod 47. Taking the gcd allowed us to isolate the factor 59.
The same type of idea is the basis for many factoring algorithms. If n = pq, you cannot separate p and q as long as they behave identically. But if you can find something that makes them behave slightly differently, then they can be separated. In the example, the multiples of P reached ∞ faster mod 59 than mod 47. Since in general the primes p and q should act fairly independently of each other, one would expect that for most curves E and points P, the multiples of P would reach ∞ mod p and mod q at different times. This will cause the gcd to find either p or q.
Usually, it takes several more steps than 3 or 4 to reach ∞ mod p or mod q. In practice, one multiplies P by a large number with many small prime factors, for example, 10000!. This can be done via successive doubling (the additive analog of successive squaring; see Exercise 21). The hope is that this multiple of P is ∞ either mod p or mod q. This is very much the analog of the p − 1 method of factoring. However, recall that the p − 1 method (see Section 9.4) usually doesn’t work when p − 1 has a large prime factor. The same type of problem could occur in the elliptic curve method just outlined when the number m such that mP equals ∞ has a large prime factor. If this happens (so the method fails to produce a factor after a while), we simply change to a new curve E′. This curve will be independent of the previous curve, and the value of m such that mP = ∞ should have essentially no relation to the previous m. After several tries (or if several curves are treated in parallel), a good curve is often found, and the number n is factored. In contrast, if the p − 1 method fails, there is nothing that can be changed other than using a different factorization method.
We want to factor n = 455839. Choose E: y² = x³ + 5x − 5 and P = (1, 1).
Suppose we try to compute 10!P. There are many ways to do this. One is to compute 2P, then 3(2P) = 3!P, then 4(3!P) = 4!P, and so on. If we do this, everything is fine through 7!P, but the computation of 8(7!P) = 8!P requires inverting a number whose gcd with 455839 is 599. Since 599 is a nontrivial divisor, we can factor 455839 as 599 · 761.
Let’s examine this more closely. A computation shows that E mod 599 has 640 points and E mod 761 has 777 points. Moreover, 640 is the smallest positive m such that mP = ∞ on E mod 599, and 777 is the smallest positive m such that mP = ∞ on E mod 761. Since 8! = 40320 = 63 · 640 is a multiple of 640, it is easy to see that 8!P = ∞ on E mod 599, as we calculated. Since 8! is not a multiple of 777 (note that 777 = 3 · 7 · 37, and 37 does not divide 8!), it follows that 8!P ≠ ∞ on E mod 761. Recall that we obtain ∞ when we divide by 0, so calculating 8!P asked us to divide by a multiple of 599. This is why we found the factor 599.
For more examples, see Examples 45 and 46 in the Computer Appendices.
In general, consider an elliptic curve E mod p for some prime p. The smallest positive m such that mP = ∞ on this curve divides the number N of points on E (if you know group theory, you’ll recognize this as a corollary of Lagrange’s theorem), so m | N. Quite often, m will be N or a large divisor of N. In any case, if N is a product of small primes, then B! will be a multiple of m for a reasonably small value of B. Therefore, B!P = ∞ on E mod p.
A number that has only small prime factors is called smooth. More precisely, if all the prime factors of an integer are less than or equal to B, then it is called B-smooth. This concept played a role in the p − 1 method and the quadratic sieve factoring method (Section 9.4), and in the index calculus attack on discrete logarithms (Section 10.2).
Recall from Hasse’s theorem that N is an integer near p. It is possible to show that the density of smooth integers is large enough (we’ll leave “small” and “large” undefined here) that if we choose a random elliptic curve E mod p, then there is a reasonable chance that the number N is smooth. This means that the elliptic curve factorization method should find p for this choice of the curve. If we try several curves E1, E2, …, Er mod n, where n = pq, then it is likely that at least one of the curves Ei mod p or Ei mod q will have its number of points being smooth.
In summary, the advantage of the elliptic curve factorization method over the p − 1 method is the following. The p − 1 method requires that p − 1 is smooth. The elliptic curve method requires only that there are enough smooth numbers near p so that at least one of some randomly chosen integers near p is smooth. This means that elliptic curve factorization succeeds much more often than the p − 1 method.
The elliptic curve method seems to be best suited for factoring numbers of medium size, say around 40 or 50 digits. These numbers are no longer used for the security of factoring-based systems such as RSA, but it is sometimes useful in other situations to have a fast factorization method for such numbers. Also, the elliptic curve method is effective when a large number has a small prime factor, say of 10 or 20 decimal digits. For large numbers where the prime factors are large, the quadratic sieve and number field sieve are superior (see Section 9.4).
In practice, the case where the cubic polynomial x³ + bx + c has multiple roots rarely arises. But what happens if it does? Does the factorization algorithm still work? The discriminant 4b³ + 27c² is zero if and only if there is a multiple root (this is the cubic analog of the fact that x² + βx + γ has a double root if and only if β² − 4γ = 0). Since we are working mod n, the relevant statement is that there is a multiple root mod p if and only if the discriminant is 0 mod p. Since n is composite, there is also the intermediate case where the gcd of n and the discriminant is neither 1 nor n. But this gives a nontrivial factor of n, so we can stop immediately in this case.
Let’s look at an example: y² = x³ − 3x + 2 = (x − 1)²(x + 2), whose cubic has a double root at x = 1.
Given a point (x, y) on this curve, we associate the number (y + √3·(x − 1)) / (y − √3·(x − 1)).
It can be shown that adding the points on the curve corresponds to multiplying the corresponding numbers. The formulas still work, as long as we don’t use the singular point (1, 0). Where does this number come from? The two lines tangent to the curve at (1, 0) are y = √3·(x − 1) and y = −√3·(x − 1). This number is simply the ratio of these two expressions.
Since we need to work mod n, we give an example mod 143. We choose 143 since 3 is a square mod 143; in fact, 82² ≡ 3 (mod 143). If 3 were not a square, things would become more technical with this curve. We could easily rectify the situation by choosing a new curve.
Consider the point P = (−1, 2) on E. Look at its multiples 2P, 3P, 4P, ….
When trying to compute 5P, we find the factor 11 of 143.
Recall that we are assigning numbers to each point on the curve, other than the singular point (1, 0). Since we are working mod 143, we use 82 in place of √3. Therefore, the number corresponding to P = (−1, 2) is (2 − 164)/(2 + 164) ≡ 80 (mod 143). We can compute the numbers for all the points above:
Let’s compare with the powers of 80 mod 143:
We get the same numbers. This is simply the fact mentioned previously that the addition of points on the curve corresponds to multiplication of the corresponding numbers. Moreover, note that 80⁵ ≡ 1 (mod 11), but not mod 13. This corresponds to the fact that 5 times the point P is ∞ mod 11 but not mod 13. Note that 1 is the multiplicative identity for multiplication mod 11, while ∞ is the additive identity for addition on the curve.
It is easy to see from the preceding that factorization using this singular curve is essentially the same as using the classical p − 1 factorization method (see Section 9.4).
In the preceding example, the cubic equation had a double root. An even worse possibility is the cubic having a triple root. Consider the curve y² = x³.
To a point (x, y) on this curve, associate the number x/y. Let’s start with the point P = (1, 1) and compute its multiples:
Note that the corresponding numbers x/y are 1, 2, 3, 4, …. Adding the points on the curve corresponds to adding the numbers x/y.
If we are using the curve to factor n, we need to change the points kP = (1/k², 1/k³) to integers mod n, which requires finding inverses for k² and k³ mod n. This is done by the extended Euclidean algorithm, which is essentially a gcd computation. We find a factor of n when gcd(k, n) ≠ 1. Therefore, this method is essentially the same as computing gcd(k, n) for k = 2, 3, 4, … in succession until a factor is found. This is a slow version of trial division, the oldest factorization technique known. Of course, in the elliptic curve factorization algorithm, a large multiple B!P of P is usually computed. This is equivalent to factoring by computing gcd(B!, n), a method that is often used to test for prime factors up to B.
In summary, we see that the p − 1 method and trial division are included in the elliptic curve factorization algorithm if we allow singular curves.
Many applications use elliptic curves mod 2, or elliptic curves defined over the finite fields GF(2^k) (these are described in Section 3.11). This is often because arithmetic in characteristic 2 adapts well to computers. In 1999, NIST recommended 15 elliptic curves for cryptographic uses (see [FIPS 186-2]). Of these, 10 are over finite fields GF(2^k).
If we’re working mod 2, the equations for elliptic curves need to be modified slightly. There are many reasons for this. For example, the derivative of y² is 2y·y′ = 0, since 2 is the same as 0. This means that the tangent lines we compute are vertical, so 2P = ∞ for all points P. A more sophisticated explanation is that the curve y² = x³ + bx + c has singularities mod 2 (points where the partial derivatives with respect to x and y simultaneously vanish).
The equations we need are of the form
y² + a1·xy + a3·y = x³ + a2·x² + a4·x + a6,
where a1, a2, a3, a4, a6 are constants. The addition law is slightly more complicated. We still have three points adding to infinity if and only if they lie on a line. Also, the lines through ∞ are vertical. But, as we’ll see in the following example, finding −P from P is not the same as before.
Let As before, we can list the points on
Let’s compute The line through these two points is Substituting into the equation for yields which can be rewritten as The roots are Therefore, the third point of intersection also has Since it lies on the line it must be (This might be puzzling. What is happening is that the line is tangent to at and also intersects in the point ) As before, we now have
To get we need to compute This means we need to find such that A line through is still a vertical line. In this case, we need one through so we take This intersects in the point We conclude that Putting everything together, we see that
In most applications, elliptic curves mod 2 are not large enough. Therefore, elliptic curves over larger finite fields of characteristic 2 are used. For an introduction to finite fields, see Section 3.11. However, in the present section, we only need the field GF(4), which we now describe.
Let
GF(4) = {0, 1, ω, ω²}
with the following laws:
0 + x = x for all x.
x + x = 0 for all x.
1 · x = x for all x.
ω² = ω + 1.
Addition and multiplication are commutative and associative, and the distributive law holds: x(y + z) = xy + xz for all x, y, z.
Since
ω · ω² = ω(ω + 1) = ω² + ω = (ω + 1) + ω = 1,
we see that ω² is the multiplicative inverse of ω. Therefore, every nonzero element of GF(4) has a multiplicative inverse.
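These rules are easy to model computationally. In the sketch below (our own representation), an element a + bω is stored as the bit pair (a, b), addition is coefficientwise mod 2, and multiplication uses ω² = ω + 1:

```python
# Arithmetic in GF(4) = {0, 1, w, w^2}, with w^2 = w + 1.
# An element a + b*w (a, b in {0, 1}) is stored as the pair (a, b).

def gf4_add(u, v):
    """Add coefficientwise mod 2 (XOR), so x + x = 0 for all x."""
    return (u[0] ^ v[0], u[1] ^ v[1])

def gf4_mul(u, v):
    """(a + b*w)(c + d*w) = ac + (ad + bc)*w + bd*w^2; replace w^2
    by w + 1 and reduce coefficients mod 2."""
    a, b = u
    c, d = v
    return (a * c ^ b * d, a * d ^ b * c ^ b * d)

ZERO, ONE, W, W2 = (0, 0), (1, 0), (0, 1), (1, 1)
```

For instance, `gf4_mul(W, W2)` evaluates to `ONE`, matching the computation ω · ω² = 1 above.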
Elliptic curves with coefficients in finite fields are treated just like elliptic curves with integer coefficients.
Consider
where ω is as before. Let’s list the points of E with coordinates in GF(4):
The points on are therefore
Let’s compute The line through these two points is Substitute this into the equation for
which becomes This has the roots The third point of intersection of the line and is therefore so
We need namely the point with The vertical line intersects in so
For cryptographic purposes, elliptic curves are used over fields GF(2^k) with k large, say k at least 150.
Elliptic curve versions exist for many cryptosystems, in particular those involving discrete logarithms. An advantage of elliptic curves compared to working with integers mod p is the following. In the integers, it is possible to use the factorization of integers into primes (especially small primes) to attack the discrete logarithm problem. This is known as the index calculus and is described in Section 10.2. There seems to be no good analog of this method for elliptic curves. Therefore, it is possible to use smaller primes, or smaller finite fields, with elliptic curves and achieve a level of security comparable to that for much larger integers mod p. This allows great savings in hardware implementations, for example.
In the following, we describe three elliptic curve versions of classical algorithms. Here is a general procedure for changing a classical system based on discrete logarithms into one using elliptic curves:
| Nonzero numbers mod p | Points on an elliptic curve E |
| Multiplication mod p | Elliptic curve addition |
| 1 (multiplicative identity) | ∞ (additive identity) |
| Division mod p | Subtraction of points |
| Exponentiation: α^k | Integer times a point: kA |
| p − 1 | N = number of points on the curve |
| Fermat: α^(p−1) ≡ 1 (mod p) | NA = ∞ (Lagrange’s theorem) |
| Discrete log problem: | Elliptic curve discrete log problem: |
| Solve α^k ≡ β (mod p) for k | Solve kA = B for k |
Notes:
The elliptic curve E is an elliptic curve mod some prime, so the number of points on the curve, including ∞, is finite.
Addition and subtraction of points on an elliptic curve are of equivalent complexity (if P = (x, y), then −P = (x, −y), and P1 − P2 is computed as P1 + (−P2)), but multiplication mod p is much easier than division mod p (done via the extended Euclidean algorithm). Both mod p operations are usually simpler than the elliptic curve operations.
The elliptic curve discrete log problem is believed to be harder than the mod p discrete log problem.
If we fix a number α and look at the set of all integers mod n, then the analogues of the above are: addition mod n, the additive identity 0, subtraction mod n, multiplying an integer k times a number mod n (that is, kα), the number n of integers mod n, the relation nα ≡ 0 (mod n), and the additive discrete log problem: solve kα ≡ β (mod n) for k, which can be done easily via the extended Euclidean algorithm. This shows that the difficulty of a discrete log problem depends on the binary operation.
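For contrast with the hard elliptic curve problem, this additive discrete log can be solved in a few lines (a sketch; the function name is ours):

```python
from math import gcd

def additive_dlog(alpha, beta, n):
    """Solve k*alpha = beta (mod n) for k, if a solution exists.
    This is just modular division, via the extended Euclidean
    algorithm (Python's pow(a, -1, m) computes a^-1 mod m)."""
    g = gcd(alpha, n)
    if beta % g != 0:
        return None                 # no solution exists
    # Divide the congruence through by g, then invert alpha/g mod n/g.
    a, b, m = alpha // g, beta // g, n // g
    return b * pow(a, -1, m) % m
```

The ease of this computation is exactly why the additive group mod n is useless for discrete-log-based cryptography.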
We recall the non-elliptic-curve version. Alice wants to send a message m to Bob, so Bob chooses a large prime p and an integer α. He also chooses a secret integer a and computes β ≡ α^a (mod p). Bob makes p, α, β public and keeps a secret. Alice chooses a random k and computes y1 and y2, where y1 ≡ α^k and y2 ≡ m·β^k (mod p).
She sends (y1, y2) to Bob, who then decrypts by calculating m ≡ y2·y1^(−a) (mod p).
Now we describe the elliptic curve version. Bob chooses an elliptic curve E (mod p), where p is a large prime. He chooses a point α on E and a secret integer a. He computes β = aα.
The points α and β are made public, while a is kept secret. Alice expresses her message as a point M on E (see Section 21.5). She chooses a random integer k, computes y1 = kα and y2 = M + kβ,
and sends the pair (y1, y2) to Bob. Bob decrypts by calculating M = y2 − a·y1.
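The scheme can be sketched as follows (a toy illustration with our own helper functions and a deliberately tiny curve; real use requires a large prime and a randomly chosen k):

```python
def make_curve(b, p):
    """Return add, mult, neg on y^2 = x^3 + b*x + c (mod p); the point
    at infinity is represented as None."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            m = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)
    def mult(k, P):
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R
    def neg(P):
        return None if P is None else (P[0], (-P[1]) % p)
    return add, mult, neg

def elgamal_encrypt(M, alpha, beta, k, add, mult):
    """Ciphertext (y1, y2) = (k*alpha, M + k*beta)."""
    return mult(k, alpha), add(M, mult(k, beta))

def elgamal_decrypt(y1, y2, a, add, mult, neg):
    """Recover M = y2 - a*y1, since a*y1 = a*k*alpha = k*beta."""
    return add(y2, neg(mult(a, y1)))
```

Decryption works because a·y1 = a(kα) = k(aα) = kβ, which cancels the mask added to M.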
A more workable version of this system is due to Menezes and Vanstone. It is described in [Stinson1, p. 189].
We must first generate a curve. As in Section 21.3, we choose a prime p, a point α, and a coefficient b, and then choose c so that α lies on the curve E: y² ≡ x³ + bx + c (mod p). Alice has a message, represented as a point M, that she wishes to send to Bob. Here is how she does it.
Bob has chosen a secret random number a and has published the point β = aα.
Alice downloads β and chooses a random number k. She sends Bob y1 = kα and y2 = M + kβ. He first calculates a·y1 = a(kα) = k(aα) = kβ. He now subtracts this from y2: y2 − a·y1 = M + kβ − kβ = M.
Note that we subtracted points by using the rule P1 − P2 = P1 + (−P2) from Section 21.1.
For another example, see Example 47 in the Computer Appendices.
Alice and Bob want to exchange a key. In order to do so, they agree on a public basepoint G on an elliptic curve E (mod p) (as in Section 21.3, the constant term of E can be chosen so that G lies on the curve). Alice randomly chooses a secret integer a and Bob randomly chooses a secret integer b. They keep these private to themselves but publish aG and bG.
Alice now takes bG and multiplies by a to get the key: a(bG) = abG.
Similarly, Bob takes aG and multiplies by b to get the key: b(aG) = abG.
Notice that they have the same key.
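The exchange can be sketched as follows (our own helper functions; the prime, curve, basepoint, and secrets below are illustrative only, and associativity guarantees the two keys agree):

```python
def make_curve(b, p):
    """Return add and mult on y^2 = x^3 + b*x + c (mod p); None = infinity."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            m = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)
    def mult(k, P):
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R
    return add, mult

# Toy exchange on y^2 = x^3 + 4x + 4 (mod 2003) with basepoint G = (1, 3).
add, mult = make_curve(4, 2003)
G = (1, 3)
a, b_secret = 123, 456                   # Alice's and Bob's secrets
aG, bG = mult(a, G), mult(b_secret, G)   # the published values
key_alice = mult(a, bG)                  # a(bG)
key_bob = mult(b_secret, aG)             # b(aG)
assert key_alice == key_bob              # both equal abG
```

An eavesdropper sees G, aG, and bG; recovering abG from these is the elliptic curve analog of the Diffie-Hellman problem.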
For another example, see Example 48 in the Computer Appendices.
There is an elliptic curve analog of the procedure described in Section 13.2. A few modifications are needed to account for the fact that we are working with both integers and points on an elliptic curve.
Alice wants to sign a message m (which might actually be the hash of a long message). We assume m is an integer. She fixes an elliptic curve E (mod p), where p is a large prime, and a point A on E. We assume that the number of points N on E has been calculated and assume m < N (if not, choose a larger p). Alice also chooses a private integer a and computes B = aA. The prime p, the curve E, the integer N, and the points A and B are made public. To sign the message, Alice does the following:
Chooses a random integer k with 1 ≤ k < N and gcd(k, N) = 1, and computes R = kA = (x, y)
Computes s ≡ k⁻¹(m − ax) (mod N)
Sends the signed message (m, R, s) to Bob
Note that R is a point on E, and that m and s are integers.
Bob verifies the signature as follows:
Downloads Alice’s public information p, E, N, A, B
Computes V1 = xB + sR and V2 = mA
Declares the signature valid if V1 = V2
The verification procedure works because
V1 = xB + sR = xaA + k⁻¹(m − ax)·kA = xaA + (m − ax)A = mA = V2.
There is a subtle point that should be mentioned. We have used k⁻¹ in this verification equation as the integer mod N satisfying k⁻¹k ≡ 1 (mod N). Therefore, k⁻¹k is not 1, but rather an integer congruent to 1 mod N. So k⁻¹k = 1 + zN for some integer z. It can be shown that NA = ∞. Therefore,
sR = k⁻¹(m − ax)·kA = (m − ax)(1 + zN)A = (m − ax)A + (m − ax)z(NA) = (m − ax)A.
This shows that k and k⁻¹ cancel each other in the verification equation, as we implicitly assumed above.
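The signing and verification equations can be sketched as follows (a toy illustration with our own helper functions and a tiny curve; a real implementation would use a standardized curve, a hashed message, and a fresh random k each time):

```python
def make_curve(b, p):
    """Return add and mult on y^2 = x^3 + b*x + c (mod p); None = infinity."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            m = (3 * x1 * x1 + b) * pow(2 * y1, -1, p) % p
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)
    def mult(k, P):
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R
    return add, mult

def ec_sign(m, a, k, A, N, add, mult):
    """R = kA = (x, y) and s = k^{-1}(m - a*x) mod N, where N is the
    number of points on the curve and gcd(k, N) = 1."""
    R = mult(k, A)
    s = pow(k, -1, N) * (m - a * R[0]) % N
    return R, s

def ec_verify(m, R, s, A, B, N, add, mult):
    """Valid if V1 = xB + sR equals V2 = mA (using NA = infinity)."""
    return add(mult(R[0] % N, B), mult(s, R)) == mult(m % N, A)
```

The reduction of x and m mod N inside the verifier is harmless precisely because NA = ∞, as explained above.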
The classical ElGamal scheme and the present elliptic curve version are analogs of each other. The integers mod p are replaced with the elliptic curve E, and the number α becomes the point A. Note that the calculations in the classical scheme work with integers that are nonzero mod p, and there are p − 1 such congruence classes. The elliptic curve version works with points on the elliptic curve that are multiples of A, and the number of such points is a divisor of N.
The use of the x-coordinate of R in the elliptic version is somewhat arbitrary. Any method of assigning integers to points on the curve would work. Using the x-coordinate is an easy choice. Similarly, in the classical ElGamal scheme, the use of the integer r in the mod (p − 1) equation for s might seem a little unnatural, since r was originally defined mod p. However, any method of assigning integers to the integers mod p would work (see Exercise 16 in Chapter 13). The use of r itself is an easy choice.
There is an elliptic curve version of the Digital Signature Algorithm that is similar to the preceding (Exercise 24).
Let x³ + ax² + bx + c be a cubic polynomial with roots r1, r2, r3. Show that r1 + r2 + r3 = −a.
Write x = x1 − a/3. Show that
x³ + ax² + bx + c = x1³ + b1·x1 + c1,
with b1 = b − a²/3 and c1 = 2a³/27 − ab/3 + c. (Remark: This shows that a simple change of variables allows us to consider the case where the coefficient of x² is 0.)
Let be the elliptic curve
List the points on (don’t forget ).
Evaluate the elliptic curve addition
List the points on the elliptic curve
Find the sum on
Find the sum on
Let be the elliptic curve
Evaluate
Evaluate
Evaluate
Find the sum of the points (1, 2) and (6,3) on the elliptic curve
Eve tries to find the sum of the points (1,2) and (6,3) on the elliptic curve What information does she obtain?
Show that if is a point on an elliptic curve, then
Find an elliptic curve mod 101 such that is a point on the curve.
The point lies on the elliptic curve defined over the rational numbers. Use the addition law to find another point with positive rational coordinates that lies on this curve.
Show that on satisfies (Hint: Compute then use Exercise 6.)
Your computations in (a) probably have shown that and Use this to show that the points are distinct.
Factor by the elliptic curve method by using the elliptic curve and calculating 3 times the point
Factor by the elliptic curve method by using the elliptic curve and the point
Suppose you want to factor a composite integer by using the elliptic curve method. You start with the curve and the point Why will this not yield the factorization of ?
Devise an analog of the procedure in Exercise 11(a) in Chapter 10 that uses elliptic curves.
Let The elliptic curve has 999984 points. Suppose you are given points and on and are told that there is an integer such that
Describe a birthday attack that is expected to find
Describe how the Baby Step, Giant Step method (see Section 10.2) finds
Let and be points on an elliptic curve Peggy claims that she knows an integer such that and she wants to convince Victor that she knows without giving Victor any information about They perform a zero-knowledge protocol. The first step is the following:
Peggy chooses a random integer and lets She computes and and sends them to Victor.
Give the remaining steps. Victor wants to be at least sure that Peggy knows (Technical note: You may regard and as numbers mod where Without congruences, Victor obtains some information about the size of
Nontechnical note: The “Technical note” may be ignored when solving the problem.)
Find all values of mod 35 such that is a point on the curve
Suppose n = pq is a product of two large primes, and let E be an elliptic curve mod n. Bob wants to find some points on E.
Bob tries choosing a random x, computing the right-hand side of the curve's equation, and finding the square root of this number mod n when the square root exists. Why will this strategy probably fail if Bob does not know p and q?
Suppose Bob knows p and q. Explain how Bob can use the method of part (a) successfully. (Hint: He needs to use the Chinese Remainder Theorem.)
Show that if are points on an elliptic curve, then
Eve is trying to find an elliptic curve discrete log: She has points and on an elliptic curve such that for some There are approximately points on so assume that She makes two lists and looks for a match. The first list is for randomly chosen values of The second is for randomly chosen values of How big should be so that there is a good chance of a match?
Give a classical (that is, not elliptic curve) version of the procedure in part (a).
Let be a point on the elliptic curve mod a prime
Show that there are only finitely many points on so has only finitely many distinct multiples.
Show that there are integers with such that Conclude that
The smallest positive integer such that is called the order of Let be an integer such that Show that divides (Hint: Imitate the proof of Exercise 53(c, d) in Chapter 3.)
(for those who know some group theory) Use Lagrange’s theorem from group theory to show that the number of points on is a multiple of the order of (Combined with Hasse’s theorem, this gives a way of finding the number of points on See Computer Problems 1 and 4.)
Let be a point on the elliptic curve Suppose you know a positive integer such that You want to prove (or disprove) that is the order of
Show that if for some prime factor of then is not the order of
Suppose and Show that for some prime divisor of
Suppose that for each prime factor of Use Exercise 11(c) to show that the order of is (Compare with Exercise 54 in Chapter 3. For an example, see Computer Problem 4.)
Let be an integer written in binary. Let be a point on the elliptic curve Perform the following procedure:
Start with and
If let If let
Let
If stop. If add 1 to and go to step 2.
Show that (Compare with Exercise 56(a) in Chapter 3.)
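A sketch of this left-to-right binary procedure in Python, using a toy curve $y^2 \equiv x^3 + x + 1 \pmod 5$ of our own choosing and representing the point at infinity by None:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b (mod p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_multiply(N, P, a, p):
    """Compute N*P by the left-to-right binary (double-and-add) procedure."""
    S = None
    for bit in bin(N)[2:]:               # scan the bits of N left to right
        S = ec_add(S, S, a, p)           # doubling step for every bit
        if bit == '1':
            S = ec_add(S, P, a, p)       # extra addition when the bit is 1
    return S

# On y^2 = x^3 + x + 1 (mod 5), the point P = (0, 1) has order 9:
P = (0, 1)
print(ec_multiply(2, P, 1, 5))   # (4, 2)
print(ec_multiply(9, P, 1, 5))   # None, i.e., the point at infinity
```

The doubling on every bit and the conditional addition mirror steps 2 and 3 of the procedure exactly.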
Let be a positive integer and let be a point on an elliptic curve. Show that the following procedure computes
Start with
If is even, let and let
If is odd, let and let
If go to step 2.
Output
(Compare with Exercise 56(b) in Chapter 3.)
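This right-to-left procedure is generic: it works in any group written additively. The sketch below (our own code) runs it with ordinary integer addition standing in for elliptic curve point addition, which makes the halving logic easy to check; swapping in genuine point addition changes nothing structurally.

```python
def multiple_rtl(N, P, add, identity):
    """Right-to-left computation of N*P, as in the exercise:
    halve N and double C when N is even; subtract 1 and absorb C when odd."""
    B, C = identity, P
    while N > 0:
        if N % 2 == 0:
            N, C = N // 2, add(C, C)     # even: N -> N/2, C -> 2C
        else:
            N, B = N - 1, add(B, C)      # odd:  N -> N-1, B -> B+C
    return B

# Stand-in group: the integers under addition (identity 0)
print(multiple_rtl(13, 7, lambda x, y: x + y, 0))   # 91 = 13 * 7
```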
Let be an elliptic curve mod (where is some integer) and let and be points on with The curve and the point are public and are known to everyone. The point is secret. Peggy wants to convince Victor that she knows They do the following procedure:
Peggy chooses a random point on and lets
Peggy computes and and sends to Victor.
Victor checks that
Victor makes a request and Peggy responds.
Victor now does something else.
They repeat steps 1 through 5 several times.
Describe what is done in steps 4 and 5.
Give a classical (non-elliptic curve) version of this protocol that yields a zero-knowledge proof that Peggy knows a solution to
Let be an elliptic curve mod a large prime, let be the number of points on and let and be points on Peggy claims to know an integer such that She wants to prove this to Victor by the following procedure. Victor knows and but he does not know and should receive no information about
Peggy chooses a random integer mod and lets (Don’t worry about why it’s mod It’s for technical reasons.)
Peggy computes and and sends and to Victor.
Victor checks something.
Victor randomly chooses or and asks Peggy for
Peggy sends to Victor.
Victor checks something.
Step (7).
What does Victor check in step (3)?
What does Victor check in step (6)?
What should step (7) be if Victor wants to be at least sure that Peggy knows ?
Here is an elliptic curve version of the Digital Signature Algorithm. Alice wants to sign a message which is an integer. She chooses a prime and an elliptic curve The number of points on is computed and a large prime factor of is found. A point is chosen such that (In fact, is not needed. Choose a point on and find an integer with There are ways of doing this, though it is not easy. Let be a large prime factor of if it exists, and let Then ) It is assumed that the message satisfies Alice chooses her secret integer and computes The public information is Alice does the following:
Chooses a random integer with and computes
Computes
Sends the signed message to Bob
Bob verifies the signature as follows:
Computes and
Computes
Declares the signature valid if
Show that the verification equation holds for a correctly signed message. Where is the fact that used (see the “subtle point” mentioned in the ElGamal scheme in Section 21.5)?
Why does exist?
If is large, why is there very little chance that does not exist mod ? How do we recognize the case when it doesn’t exist? (Of course, in this case, Alice should start over by choosing a new )
How many computations “(large integer)(point on )” are made in the verification process here? How many are made in the verification process for the elliptic ElGamal scheme described in the text? (Compare with the end of Section 13.5.)
Let and be points on an elliptic curve and suppose for some integer Suppose also that for some integer but
Show that if then Therefore, we may assume that
Let be an integer. Show that when is even and when is odd.
Write where each is 0 or 1 (binary expansion of ). Show that if and only if
Suppose that for some we know Let Show that if and only if This allows us to find Continuing in this way, we obtain and therefore we can compute This technique can be extended to the case where where is an integer with only small prime factors. This is the analog of the Pohlig-Hellman algorithm (see Section 10.2).
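The classical analogue of this bit-by-bit recovery can be sketched in a few lines of Python: take $g$ of order $2^m$ mod $p$ and recover the exponent one binary digit at a time, exactly as the exercise does with multiples of $P$. The prime 257 and generator 3 are our own example (3 has order $256 = 2^8$ mod 257).

```python
def dlog_power_of_two(g, h, m, p):
    """Solve g^x = h (mod p), where g has order 2^m mod p,
    recovering the bits x0, x1, ... of x one at a time."""
    x = 0
    for i in range(m):
        # Strip off the bits found so far, then raise to the power 2^(m-1-i):
        # the result is 1 exactly when bit i of x is 0.
        t = pow(h * pow(g, -x, p) % p, 2 ** (m - 1 - i), p)
        if t != 1:
            x += 2 ** i
    return x

p, g, m = 257, 3, 8              # 3 has order 256 = 2^8 mod 257
h = pow(g, 101, p)               # toy secret exponent 101
print(dlog_power_of_two(g, h, m, p))   # 101
```

This is the Pohlig-Hellman idea for a group of 2-power order; the elliptic curve version replaces each exponentiation by a point multiplication.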
Let be the elliptic curve
Find the sum
Find the sum
Using the result of part (b), find the difference
Find an integer such that
Show that has exactly 20 distinct multiples, including
Using (e) and Exercise 19(d), show that the number of points on is a multiple of 20. Use Hasse’s theorem to show that has exactly 20 points.
You want to represent the message as a point on the curve Write and find a value of the missing last digit of such that there is a point on the curve with this .
Factor 3900353 using elliptic curves.
Try to factor 3900353 using the method of Section 9.4. Using the knowledge of the prime factors obtained from part (a), explain why the method does not work well for this problem.
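For part (b), one can watch the $p-1$ method (presumably the method of Section 9.4) struggle on this number. The sketch below is our own code; one can check that $3900353 = 1109 \cdot 3517$, with $1108 = 2^2 \cdot 277$ and $3516 = 2^2 \cdot 3 \cdot 293$. The bound $B$ must reach 277 before anything happens, and once it passes 293 both primes are found simultaneously, so the gcd jumps straight to $n$.

```python
from math import gcd

def p_minus_1(n, B, a=2):
    """Pollard's p-1 method: compute a^(B!) mod n, then gcd with n."""
    for j in range(2, B + 1):
        a = pow(a, j, n)
    return gcd(a - 1, n)

n = 3900353                  # = 1109 * 3517
print(p_minus_1(n, 100))     # 1: bound too small (277 not reached)
print(p_minus_1(n, 280))     # 1109: the narrow window 277 <= B < 293
print(p_minus_1(n, 300))     # 3900353: both factors found at once, gcd = n
```

Because the largest prime factors of $p-1$ and $q-1$ are so close (277 and 293), the useful range of bounds is tiny, which is why the method works poorly here.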
Let be a point on the elliptic curve
Show that but and
Use Exercise 20 to show that has order 189.
Use Exercise 19(d) and Hasse’s theorem to show that the elliptic curve has 567 points.
Compute the difference on the elliptic curve Note that the answer involves large integers, even though the original points have small coordinates.
As we pointed out in the previous chapter, elliptic curves have various advantages in cryptosystems based on discrete logs. However, as we’ll see in this chapter, they also open up exciting new vistas for cryptographic applications. The existence of a bilinear pairing on certain elliptic curves is what makes this possible.
First, we’ll describe one of these pairings. Then we’ll give various applications, including identity-based encryption, digital signatures, and encrypted keyword searches.
Although most of this chapter could be done in the context of cyclic groups of prime order, the primary examples of pairings in cryptography are based on elliptic curves or closely related situations. Therefore, for concreteness, we use only the following situation.
Let $p$ be a prime of the form $6q - 1$, where $q$ is also prime, and let $E$ be the elliptic curve $y^2 \equiv x^3 + 1 \pmod{p}$. We need the following facts about $E$.
There are exactly $p + 1 = 6q$ points on $E$.
There is a point $P_0 \neq \infty$ such that $qP_0 = \infty$. In fact, if we take a random point $P$, then, with very high probability, $6P \neq \infty$ and $6P$ is a multiple of $P_0$.
There is a function $\tilde{e}$ that maps pairs of points $(aP_0, bP_0)$ to $q$th roots of unity, for all integers $a, b$. It satisfies the bilinearity property
$\tilde{e}(aP, bQ) = \tilde{e}(P, Q)^{ab}$
for all integers $a$ and $b$. This implies that
$\tilde{e}(P_1 + P_2, Q) = \tilde{e}(P_1, Q)\,\tilde{e}(P_2, Q)$
for all points $P_1, P_2, Q$ that are multiples of $P_0$. (See Exercise 2.)
If we are given two points $P$ and $Q$ that are multiples of $P_0$, then $\tilde{e}(P, Q)$ can be computed quickly from the coordinates of $P$ and $Q$.
$\tilde{e}(P_0, P_0) \neq 1$, so it is a nontrivial $q$th root of unity.
We also make the following assumption:
If we are given a random point on and a random th root of unity it is computationally infeasible to find a point on with and it is computationally infeasible to find with .
Properties (1) and (2) are fairly easy to verify (see Exercises 4 and 5). The existence of a function $\tilde{e}$ satisfying (3), (4), (5) is deep. In fact, $\tilde{e}$ is a modification of what is known as the Weil pairing in the theory of elliptic curves. The usual Weil pairing satisfies $e(P_0, P_0) = 1$, but the present version is modified using special properties of $E$ to obtain (5). It is generally assumed that (A) is true if the prime $p$ is large enough, but this is not known. See Exercise 10.
The fact that $\tilde{e}(P, Q)$ can be computed quickly needs some more explanation. The two points satisfy $P = aP_0$ and $Q = bP_0$ for some integers $a$ and $b$. However, finding $a$ and $b$ requires solving a discrete log problem, which could take a long time. Therefore, the obvious solution of choosing a random $q$th root of unity for $\tilde{e}(P_0, P_0)$ and then using the bilinearity property to define $\tilde{e}(P, Q)$ does not work, since it cannot be computed quickly. Instead, $\tilde{e}(P, Q)$ is computed directly in terms of the coordinates of the points $P$ and $Q$.
Although we will not need to know this, the $q$th roots of unity lie in the finite field with $p^2$ elements (see Section 3.11).
For more about the definition of $\tilde{e}$, see [Boneh-Franklin] or [Washington].
The curve $E$ is an example of a supersingular elliptic curve, namely one where the number of points is congruent to 1 mod $p$. (See Exercise 4.) For a while, these curves were regarded as desirable for cryptographic purposes because computations can be done quickly on them. But then it was shown that the discrete logarithm problem for them is only slightly more difficult than the classical discrete logarithm problem mod $p$ (see Section 22.2), so they fell out of favor (after all, they are slower computationally than simple multiplication mod $p$, and they provide no security advantage). Because of the existence of the pairing $\tilde{e}$, they have become popular again.
Let $E$ be the elliptic curve from Section 22.1. Suppose $Q = kP_0$, where $k$ is some integer and $P_0$ is the point from Section 22.1. The Elliptic Curve Discrete Log Problem asks us to find $k$. Menezes, Okamoto, and Vanstone showed the following method of reducing this problem to the discrete log problem in the field with $p^2$ elements. Observe that
$\tilde{e}(Q, P_0) = \tilde{e}(kP_0, P_0) = \tilde{e}(P_0, P_0)^k.$
Therefore, solving the discrete log problem $\tilde{e}(P_0, P_0)^k = \tilde{e}(Q, P_0)$ for $k$ yields the answer. Note that this latter discrete log problem is not on the elliptic curve, but instead in the finite field with $p^2$ elements. There are analogues of the index calculus for this situation, so usually this is an easier discrete log problem.
For a randomly chosen (not necessarily supersingular) elliptic curve, the method still works in theory. But the values of the pairing usually lie in a field much larger than the field with $p^2$ elements. This slows down the computations enough that the MOV attack is infeasible for most non-supersingular curves.
The MOV attack shows that cryptosystems based on the elliptic curve discrete log problem for supersingular curves give no substantial advantage over the classical discrete log problem mod a prime. For this reason, supersingular curves were avoided for cryptographic purposes until they occurred in applications where pairings needed to be computed quickly, as in the next few sections.
Alice, Bob, and Carlos want to agree on a common key (for a symmetric cryptosystem). All communications among them are public. If there were only two people, Diffie-Hellman could be used. A slight extension of this procedure works for three people:
Alice, Bob, and Carlos agree on a large prime $p$ and a primitive root $\alpha$.
Alice chooses a secret integer $a$, Bob chooses a secret integer $b$, and Carlos chooses a secret integer $c$.
Alice computes $\alpha^a \pmod{p}$, Bob computes $\alpha^b$, and Carlos computes $\alpha^c$.
Alice sends $\alpha^a$ to Bob, Bob sends $\alpha^b$ to Carlos, and Carlos sends $\alpha^c$ to Alice.
Alice computes $(\alpha^c)^a$, Bob computes $(\alpha^a)^b$, and Carlos computes $(\alpha^b)^c$.
Alice sends $\alpha^{ca}$ to Bob, Bob sends $\alpha^{ab}$ to Carlos, and Carlos sends $\alpha^{bc}$ to Alice.
Alice computes $(\alpha^{bc})^a$, Bob computes $(\alpha^{ca})^b$, and Carlos computes $(\alpha^{ab})^c$. Note that each person now has $\alpha^{abc}$.
Alice, Bob, and Carlos use some agreed-upon method to obtain keys from $\alpha^{abc}$. For example, they could use some standard hash function and apply it to $\alpha^{abc}$.
This protocol could also be used with $p$ and $\alpha$ replaced by an elliptic curve $E$ and a point $P$, so Alice computes $aP$, etc., and the final result is $abcP$.
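The two-round protocol above is easy to simulate. In this Python sketch we use the small prime 23 with primitive root 5 purely for illustration; real parameters would be hundreds of digits long.

```python
p, g = 23, 5                      # toy public parameters
a, b, c = 6, 9, 13                # secrets of Alice, Bob, Carlos

# Round 1: each party sends g^secret to the next person in the cycle.
A, B, C = pow(g, a, p), pow(g, b, p), pow(g, c, p)

# Round 2: Alice received C, Bob received A, Carlos received B;
# each raises what was received to their own secret and passes it on.
CA, AB, BC = pow(C, a, p), pow(A, b, p), pow(B, c, p)

# Final step: one more exponentiation gives g^(abc) for everyone.
key_alice = pow(BC, a, p)         # (g^(bc))^a
key_bob = pow(CA, b, p)           # (g^(ca))^b
key_carlos = pow(AB, c, p)        # (g^(ab))^c
print(key_alice == key_bob == key_carlos)   # True
```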
In 2000, Joux showed how to use pairings to obtain a more efficient protocol, one in which there is only one round instead of two:
Alice, Bob, and Carlos choose a supersingular elliptic curve $E$ and a point $P$ as in Section 22.1.
Alice chooses a secret integer $a$, Bob chooses a secret integer $b$, and Carlos chooses a secret integer $c$.
Alice computes $aP$, Bob computes $bP$, and Carlos computes $cP$.
Alice makes $aP$ public, Bob makes $bP$ public, and Carlos makes $cP$ public.
Alice computes $\tilde{e}(bP, cP)^a$, Bob computes $\tilde{e}(aP, cP)^b$, and Carlos computes $\tilde{e}(aP, bP)^c$. Note that each person has computed $\tilde{e}(P, P)^{abc}$.
Alice, Bob, and Carlos use some agreed-upon method to obtain keys from $\tilde{e}(P, P)^{abc}$. For example, they could apply some standard hash function to this number.
The eavesdropper Eve sees $E$, $P$, $aP$, $bP$, and $cP$, and needs to compute $\tilde{e}(P, P)^{abc}$. This computation is called the Bilinear Diffie-Hellman Problem. It is not known how difficult it is. However, if Eve can solve the Computational Diffie-Hellman Problem (see Section 10.4), then she uses $aP$ and $bP$ to obtain $abP$ and computes $\tilde{e}(abP, cP) = \tilde{e}(P, P)^{abc}$. Therefore, the Bilinear Diffie-Hellman Problem is no harder than the Computational Diffie-Hellman Problem.
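To see the mechanics of the one-round protocol, here is a deliberately insecure toy model in Python. "Points" are just integers mod $q$ (so $aP$ is the integer $a \cdot 1 \bmod q$), and the "pairing" is $e(A, B) = g^{AB \bmod q}$ with $g$ of order $q$ mod $p$, which is trivially bilinear. In the real protocol the secrets are hidden by the curve; here they are not, so this illustrates only the bookkeeping, not the security. All parameters are our own.

```python
p, q = 23, 11          # q divides p - 1; toy sizes only
g = 2                  # 2 has order 11 mod 23
P = 1                  # "base point": the integer 1 mod q

def pairing(A, B):
    """Toy bilinear map: e(A, B) = g^(A*B) mod p.
    It satisfies e(a*P, b*P) = e(P, P)^(a*b), like the real pairing."""
    return pow(g, (A * B) % q, p)

a, b, c = 3, 4, 5                             # the three secrets
A, B, C = a * P % q, b * P % q, c * P % q     # one broadcast each

key_alice = pow(pairing(B, C), a, p)          # e(bP, cP)^a
key_bob = pow(pairing(A, C), b, p)            # e(aP, cP)^b
key_carlos = pow(pairing(A, B), c, p)         # e(aP, bP)^c
print(key_alice == key_bob == key_carlos)     # True: all equal e(P,P)^(abc)
```

A single broadcast per party suffices, which is exactly the advantage over the two-round classical protocol.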
Joux’s result showed that pairings could be used in a constructive way in cryptography, rather than only in a destructive method such as the MOV attack, and this led to pairings being considered for applications such as those in the next few sections. It also meant that supersingular curves again became useful in cryptography, with the added requirement that when a curve mod is used, the prime must be chosen large enough that the classical discrete logarithm problem (solve for ) is intractable.
In most public key systems, when Alice wants to send a message to Bob, she looks up his public key in a directory and then encrypts her message. However, she needs some type of authentication – perhaps the directory has been modified by Eve, and the public key listed for Bob was actually created by Eve. Alice wants to avoid this situation. It was suggested by Shamir in 1984 that it would be nice to have an identity-based system, where Bob’s public identification information (for example, his email address) serves as his public key. Such a system was finally designed in 2001 by Boneh and Franklin.
Of course, some type of authentication of each user is still needed. In the present system, this occurs in the initial setup of the system during the communications between the Trusted Authority and the User. In the following, we give the basic idea of the system. For more details and improvements, see [Boneh-Franklin].
We need two public hash functions:
$H_1$ maps binary strings of arbitrary length to multiples of $P_0$. A little care is needed in defining $H_1$, since no one should be able, given a binary string $b$, to find $k$ with $H_1(b) = kP_0$. See Exercise 7.
$H_2$ maps $q$th roots of unity to binary strings of length $n$, where $n$ is the length of the messages that will be sent. Since $n$ must be specified before the system is set up, this limits the lengths of the messages that can be sent. However, the message could be, for example, a DES key that is used to encrypt a much longer message, so this length requirement is not a severe restriction.
To set up the system we need a Trusted Authority. Let’s call him Arthur. Arthur does the following.
He chooses, once and for all, a secret integer He computes which is made public.
For each User, Arthur finds the user’s identification ID (written as a binary string) and computes
Recall that is a point on so is times this point.
Arthur uses a secure channel to send to the user, who keeps it secret. Arthur does not need to store so he discards it.
The system is now ready to operate, but first let’s review what is known:
Public:
Secret: (known only to Arthur), (one for each User; it is known only by that User)
Alice wants to send an email message (of binary length ) to Bob, who is one of the Users. She knows Bob’s address, which is bob@computer.com. This is his ID. Alice does the following.
She computes ((bob@computer.com), ). This is a th root of unity.
She chooses a random and computes
She sends Bob the ciphertext
Note that is a point on and is a binary string of length
If Bob receives a pair where is a point on and is a binary string of length then he does the following.
He computes which is a th root of unity.
He recovers the message as
Why does this yield the message? If the encryption is performed correctly, Bob receives and Since (bob@computer.com),
Therefore,
as desired. Note that the main step is Equation (22.1), which removes the secret from the in the first argument of and puts it on the in the second argument. This follows from the bilinearity property of the function Almost all of the cryptographic uses of pairings have a similar idea of moving from one masking of the secret to another. The pairing allows this to be done without knowing the value of
It is very important that be kept secret. If Eve obtains then she can compute the points for each user and read every email. Since the security of is compromised if Eve can compute discrete logs on the elliptic curve. Moreover, the ciphertext contains If Eve can compute a discrete log and find then she can compute and use this to find and also Therefore, for the security of the system, it is vital that be chosen large enough that discrete logs are computationally infeasible.
Alice wants to sign a document $m$. In earlier chapters, we have seen how to do this with RSA and ElGamal signatures. The BLS method, due to Boneh, Lynn, and Shacham, uses pairings.
We use a supersingular elliptic curve $E$ and point $P_0$ as in Section 22.1. To set up the signature scheme, we'll need a public hash function $H$ that maps binary strings of arbitrary length to multiples of $P_0$. A little care is needed in defining $H$, since no one should be able, given a binary string $b$, to find $k$ with $H(b) = kP_0$. See Exercise 7.
To set up the system, Alice chooses, once and for all, a secret integer and computes which is made public.
Alice’s signature for the message is which is a point on
To verify that is a valid signed message, Bob checks
If this equation holds, Bob says that the signature is valid.
If Alice signs correctly,
so the verification equation holds.
Suppose Eve wants to forge Alice's signature on a document. The values entering the verification equation are then already determined, so Eve needs to find a point whose pairing with a known point equals a known quantity. Assumption (A) in Section 22.1 says that (we hope) this is computationally infeasible.
The BLS signature scheme uses a hash function whose values are points on an elliptic curve. This might seem less natural than using a standard hash function with values that are binary strings (that is, numbers). The following method of Zhang, Safavi-Naini, and Susilo remedies this situation. Let be a standard hash function such as SHA-3 or SHA-256 that maps binary strings of arbitrary length to binary strings of fixed length. Regard the output of as an integer. Alice’s key is the same as in BLS, namely, But the signature is computed as
where $(h(m) + a)^{-1}$ is the modular multiplicative inverse of $h(m) + a$ mod $q$ (where $q$ is the order of $P_0$, as in Section 22.1).
The verification equation for the signed message is
Since the left side of the verification equation equals the right side when Alice signs correctly. Again, assumption (A) from Section 22.1 says that it should be hard for Eve to forge Alice’s signature.
When Alice uses one of the above methods or uses RSA or ElGamal signatures to sign a document, and Bob wants to verify the signature, he looks up Alice's key on a web page, for example, and uses it in the verification process. This means he must trust the web page to be correct, not a fake one made by Eve. It would be preferable to use something closely associated with Alice such as her email address as the public key. This is of course the same problem that was solved in the previous section for encryption, and similar techniques work in the present situation.
The following method of Hess gives an identity-based signature scheme.
We use a supersingular elliptic curve and point as in Section 22.1. We need two public hash functions:
$H_1$ maps binary strings of arbitrary length to multiples of $P_0$. A little care is needed in defining $H_1$, since no one should be able, given a binary string $b$, to find $k$ with $H_1(b) = kP_0$. See Exercise 7.
$H_2$ maps binary strings of arbitrary length to binary strings of fixed length; for example, $H_2$ can be a standard hash function such as SHA-3 or SHA-256.
To set up the system, we need a Trusted Authority. Let’s call him Arthur. Arthur does the following.
He chooses, once and for all, a secret integer He computes which is made public.
For each User, Arthur finds the user’s identification ID (written as a binary string) and computes
Recall that is a point on so is times this point.
Arthur uses a secure channel to send to the user, who keeps it secret. Arthur does not need to store so he discards it.
The system is now ready to operate, but first let’s review what is known:
Public:
Secret: (known only to Arthur), (one for each User; it is known only by that User)
To sign Alice does the following.
She chooses a random point on
She computes
She computes
She computes
The signed message is
If Bob receives a triple where is a point on and is a binary string, then he does the following.
He computes which is a th root of unity.
He computes
If then Bob says the signature is valid.
Let’s suppose Alice signed correctly. Then, writing for we have
Also,
Therefore,
so the signature is declared valid.
Suppose Eve has a document and she wants to forge Alice’s signature so that is a valid signature. She cannot choose arbitrarily since Bob is going to compute and see whether Therefore, if is a good hash function, Eve’s best strategy is to choose a value and compute Since is collision resistant and this should equal But is completely determined by Alice’s ID and This means that in order to satisfy the verification equation, Eve must find such that equals a given quantity. Assumption (A) says this should be hard to do.
Alice runs a large corporation. Various employees send in reports containing secret information. These reports need to be sorted and routed to various departments, depending on the subject of the message. Of course, each report could have a set of keywords attached, and the keywords could determine which departments receive the reports. But maybe the keywords are sensitive information, too. Therefore, the keywords need to be encrypted. But if the person who sorts the reports decrypts the keywords, then this person sees the sensitive keywords. It would be good to have the keywords encrypted in a way that lets an authorized person search a message for a certain keyword, learn whether or not that keyword is present, and receive no information about any other keywords. A solution to this problem was found by Boneh, Di Crescenzo, Ostrovsky, and Persiano.
Start with a supersingular elliptic curve and point as in Section 22.1. We need two public hash functions:
$H_1$ maps binary strings of arbitrary length to multiples of $P_0$. A little care is needed in defining $H_1$, since no one should be able, given a binary string $b$, to find $k$ with $H_1(b) = kP_0$. See Exercise 7.
$H_2$ maps the $q$th roots of unity (in the finite field with $p^2$ elements) to binary strings of fixed length; for example, if the roots of unity are expressed in binary, $H_2$ can be a standard hash function.
Alice sets up the system as follows. Incoming reports (encrypted) will have attachments consisting of encrypted keywords. Each department will have a set of keyword searches that it is allowed to do. If it finds one of its allotted keywords, then it saves the report.
Alice chooses a secret integer and computes
Alice computes
The point is sent via secure channels to each department that is authorized to search for
When Daphne writes a report, she attaches the relevant encrypted keywords to the documents. These encrypted keywords are produced as follows:
Let be a keyword. Daphne chooses a random integer and computes
The encryption of the keyword is the pair
A searcher looks at the encrypted keywords attached to a document and checks
If yes, then the searcher concludes that is a keyword for that document. If no, then is not a keyword for it.
Why does this work? If is the encrypted form of the keyword then
for some Therefore,
so
Suppose conversely, that is another keyword. Is it possible that corresponds to both and ? Since the same value of could be used in the encryptions of and it is possible that occurs in encryptions of both and However, we’ll show that in this case the number comes from at most one of and Since is collision resistant and and we expect that and (See Exercise 1.) Since is collision resistant, we expect that
Therefore, passes the verification equation for at most one keyword
Each time that Daphne encrypts the keyword, a different should be used. Otherwise, the encryption of the keyword will be the same and this can lead to information being leaked. For example, someone could notice that certain keywords occur frequently and make some guesses as to their meanings.
There are many potential applications of this keyword search scheme. For example, suppose a medical researcher wants to find out how many patients at a certain hospital were treated for disease in the previous year. For privacy reasons, the administration does not want the researcher to obtain any other information about the patients, for example, gender, race, age, and other diseases. The administration could give the researcher Then the researcher could search the encrypted medical records for keyword without obtaining any information other than the presence or absence of
Let be the supersingular elliptic curve from Section 22.1.
Let and be multiples of Show that (Hint: Use the fact that is a th root of unity and that if and only if )
Let be a multiple of and let be multiples of Show that if then
Let be the supersingular elliptic curve from Section 22.1.
Show that
for all points that are multiples of
Show that
for all that are multiples of
Let be the supersingular elliptic curve from Section 22.1. Suppose you have points on that are multiples of and are not equal to Let and be two secret integers. Suppose you are given the points and Find a way to use to decide whether or not
Let $p \equiv 2 \pmod{3}$ be prime.
Show that there exists $d$ with $3d \equiv 1 \pmod{p-1}$.
Show that $x^3 \equiv y^3 \pmod{p}$ if and only if $x \equiv y \pmod{p}$. This shows that every integer mod $p$ has a unique cube root.
Show that $y^2 \equiv x^3 + 1 \pmod{p}$ has exactly $p + 1$ points (including the point $\infty$). (Hint: Apply part (b) to $y^2 - 1$.) (Remark: A curve mod $p$ whose number of points is congruent to 1 mod $p$ is called supersingular.)
(for those who know some group theory)
In the situation of Exercise 4, suppose that with also prime. Show that there exists a point such that
Let as in Exercise 9 in Chapter 21. Show that if then and is a multiple of (For simplicity, assume that )
Let be a hash function that takes a binary string of arbitrary length as input and then outputs an integer mod Let be prime with also prime. Show how to use to construct a hash function that takes a binary string of arbitrary length as input and outputs a point on the elliptic curve that is a multiple of the point of Section 22.1. (Hint: Use the technique of Exercise 4 to find then Then use Exercise 5(b).)
In the identity-based encryption system of Section 22.4, suppose Eve can compute such that (bob@computer.edu) . Show that Eve can compute and therefore read Bob’s messages.
In the BLS signature scheme of Section 22.5.1, suppose Eve can compute such that Show that Eve can compute such that is a valid signed document.
In the identity-based signature scheme of Section 22.5.1, suppose Eve can compute such that Show that Eve can compute and therefore forge Alice’s signature on documents.
In the keyword search scheme of Section 22.6, suppose Eve can compute such that Show that Eve can compute and therefore find the occurrences of encrypted on documents.
Let and be as in Section 22.1. Show that an analogue of the Decision Diffie-Hellman problem can be solved for Namely, if we are given show how we can decide whether
Suppose you try to set up an identity-based cryptosystem as follows. Arthur chooses large primes and and forms which is made public. For each User, he converts the User’s identification to a number by some public method and then computes with Arthur gives to the User. The integer is the same for all users. When Alice wants to send an email to Bob, she uses the public method to convert his email address to and then uses this to encrypt messages with RSA. Bob knows so he can decrypt. Explain why this system is not secure.
You are given a point on the curve of Section 22.1 and you are given a th root of unity Suppose you can solve discrete log problems for the th roots of unity. That is, if and are th roots of unity, you can find so that Show how to find a point on with
Lattices have become an important tool for the cryptanalyst. In this chapter, we give a sampling of some of the techniques. In particular, we use lattice reduction techniques to attack RSA in certain cases. Also, we describe the NTRU public key system and show how it relates to lattices. For a more detailed survey of cryptographic applications of lattices, see [Nguyen-Stern].
Let be linearly independent vectors in -dimensional real space . This means that every -dimensional real vector can be written in the form
with real numbers that are uniquely determined by . The lattice generated by is the set of vectors of the form
where are integers. The set is called a basis of the lattice. A lattice has infinitely many possible bases. For example, suppose is a basis of a lattice. Let be an integer and let and . Then is also a basis of the lattice: Any vector of the form can be written as with and , and similarly any integer linear combination of and can be written as an integer linear combination of and .
Let and . The lattice generated by and is the set of all pairs with integers. Another basis for this lattice is . A third basis is . More generally, if is a matrix with determinant , then is a basis of this lattice (Exercise 4).
The length of a vector $v = (x_1, \ldots, x_n)$ is $|v| = \sqrt{x_1^2 + \cdots + x_n^2}$.
Many problems can be related to finding a shortest nonzero vector in a lattice. In general, the shortest vector problem is hard to solve, especially when the dimension of the lattice is large. In the following section, we give some methods that work well in small dimensions.
A shortest vector in the lattice generated by
is (another shortest vector is ). How do we find this vector? This is the subject of the next section. For the moment, we verify that is in the lattice by writing
In fact, is a basis of the lattice. For most purposes, this latter basis is much easier to work with than the original basis since the two vectors and are almost orthogonal (their dot product is , which is small). In contrast, the two vectors of the original basis are nearly parallel and have a very large dot product. The methods of the next section show how to replace a basis of a lattice with a new basis whose vectors are almost orthogonal.
Let $v_1, v_2$ form the basis of a two-dimensional lattice. Our first goal is to replace this basis with what will be called a reduced basis.
If $|v_2| < |v_1|$, then swap $v_1$ and $v_2$, so we may assume that $|v_1| \le |v_2|$. Ideally, we would like to replace $v_2$ with a vector perpendicular to $v_1$. As in the Gram-Schmidt process from linear algebra, the vector
$v_2 - \dfrac{v_1 \cdot v_2}{v_1 \cdot v_1}\, v_1$
is perpendicular to $v_1$. But this vector might not lie in the lattice. Instead, let $t$ be the closest integer to $\dfrac{v_1 \cdot v_2}{v_1 \cdot v_1}$ (for definiteness, take 0 to be the closest integer to $1/2$, and 1 to be the closest to $3/2$, etc.). Then we replace the basis $\{v_1, v_2\}$ with the basis $\{v_1,\; v_2 - t v_1\}$.
We then repeat the process with this new basis.
We say that the basis $\{v_1, v_2\}$ is reduced if
$|v_1| \le |v_2|$ and $\left|\dfrac{v_1 \cdot v_2}{v_1 \cdot v_1}\right| \le \dfrac{1}{2}.$
The above reduction process stops exactly when we obtain a reduced basis, since this means that $t = 0$.
In the figures, the first basis is reduced because is longer than and the projection of onto is less than half the length of . The second basis is nonreduced because the projection of onto is too long. It is easy to see that a basis is reduced when is at least as long as and lies within the dotted lines of the figures.
Let’s start with and . We have , so we do not swap the two vectors. Since
we take . The new basis is
Swap and and rename the vectors to obtain a basis
We have
so we take . This yields vectors
Swap these and name them and . We have
so . This yields, after a swap,
Since and
the basis is reduced.
A natural question is whether this process always produces a reduced basis. The answer is yes, as we prove in the following theorem. Moreover, the first vector in the reduced basis is a shortest vector for the lattice.
We summarize the discussion in the following.
Let $\{v_1, v_2\}$ be a basis for a two-dimensional lattice in $\mathbf{R}^2$. Perform the following algorithm:
If $|v_2| < |v_1|$, swap $v_1$ and $v_2$ so that $|v_1| \le |v_2|$.
Let $t$ be the closest integer to $\dfrac{v_1 \cdot v_2}{v_1 \cdot v_1}$.
If $t = 0$, stop. If $t \neq 0$, replace $v_2$ with $v_2 - t v_1$ and return to step 1.
The algorithm stops in finite time and yields a reduced basis of the lattice. The vector $v_1$ is a shortest nonzero vector for the lattice.
Proof
First we prove that the algorithm eventually stops. As in Equation 23.1, let and let . Then
Since and are orthogonal, the Pythagorean theorem yields
Also, since , and again since and are orthogonal,
Note that if then and . Otherwise, . Therefore, if , we have , which implies that
Therefore, if the process continues forever without yielding a reduced basis, then the lengths of the vectors decrease indefinitely. However, there are only finitely many vectors in the lattice that are shorter than the original basis vector . Therefore, the lengths cannot decrease forever, and a reduced basis must be found eventually.
To prove that the vector in a reduced basis is a shortest nonzero vector for the lattice, let be any nonzero vector in the lattice, where and are integers. Then
Because is reduced,
which implies that . Therefore,
since by assumption. Therefore,
But is an integer. Writing it as , we see that it is nonnegative, and it equals 0 if and only if . Since , we must have . Therefore,
so is a shortest nonzero vector.
Lattice reduction in dimensions higher than two is much more difficult. One of the most successful algorithms was invented by A. Lenstra, H. Lenstra, and L. Lovász and is called the LLL algorithm. In many problems, a short vector is needed, and it is not necessary that the vector be the shortest. The LLL algorithm takes this approach and looks for short vectors that are almost as short as possible. This modified approach makes the algorithm run very quickly (in what is known as polynomial time). The algorithm performs calculations similar to those in the two-dimensional case, but the steps are more technical, so we omit details, which can be found in [Cohen], for example. The result is the following.
Let be the -dimensional lattice generated by in . Define the determinant of the lattice to be
(This can be shown to be independent of the choice of basis. It is the volume of the parallelepiped spanned by .) Let be the length of a shortest nonzero vector in . The LLL algorithm produces a basis of satisfying
.
Statement (2) says that is close to being a shortest vector, at least when the dimension is small. Statement (3) says that the new basis vectors are in some sense close to being orthogonal. More precisely, if the vectors are orthogonal, then the volume equals the product . The fact that this product is no more than times says that the vectors are mostly close to orthogonal.
The running time of the LLL algorithm is less than a constant times , where is the dimension and is a bound on the lengths of the original basis vectors. In practice it is much faster than this bound. This estimate shows that the running time is quite good with respect to the size of the vectors, but potentially not efficient when the dimension gets large.
Let’s consider the lattice generated by (31, 59) and (37, 70), which we considered earlier when looking at the two-dimensional algorithm. The LLL algorithm yields the same result, namely and . We have and (given by , for example). The statements of the theorem are
.
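For small dimensions, the LLL procedure itself can be sketched directly. The version below is a bare-bones textbook form (with the common parameter δ = 3/4) in exact rational arithmetic; it recomputes the Gram–Schmidt data from scratch at every step for clarity, whereas real implementations update it incrementally.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """A minimal LLL sketch in exact arithmetic (not optimized)."""
    b = [[Fraction(x) for x in vec] for vec in basis]
    n = len(b)

    def dot(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))

    def gram_schmidt():
        # Orthogonalized vectors bstar and coefficients mu[i][j].
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gram_schmidt()
        # Size-reduce b[k] against b[k-1], ..., b[0].
        for j in range(k - 1, -1, -1):
            r = round(mu[k][j])
            if r != 0:
                b[k] = [x - r * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovász condition: advance if satisfied, otherwise swap and back up.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in vec] for vec in b]

print(lll([(31, 59), (37, 70)]))   # [[3, -1], [1, 4]]
```

On the two-dimensional lattice generated by (31, 59) and (37, 70), this reproduces the reduced basis found by the two-dimensional algorithm.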
Alice wants to send Bob a message of the form
or
In these cases, the message is of the form
for some integer . We’ll present an attack that works when the encryption exponent is small.
Suppose Bob has public RSA key . Then the ciphertext is
We assume that Eve knows , and , so she only needs to find . She forms the polynomial
Eve is looking for such that . In other words, she is looking for a small solution to a polynomial congruence .
Eve applies the LLL algorithm to the lattice generated by the vectors
This yields a new basis , but we need only . The theorem in Subsection 23.2.2 tells us that
We can write
with integers and with
It is easy to see that
Form the polynomial
Then, since the integer satisfies and since the coefficients of and are congruent mod ,
Assume now that
Then
where the last inequality used the Cauchy-Schwarz inequality for dot products (that is, ). Since, by (17.3) and (17.4),
we obtain
Since , we must have . The zeros of may be determined numerically, and we obtain at most three candidates for . Each of these may be tried to see if it gives the correct ciphertext. Therefore, Eve can find .
Note that the above method replaces the problem of finding a solution to the congruence with that of solving an exact (non-congruence) equation . Solving a congruence often requires factoring , but an exact equation can be solved by numerical procedures such as Newton's method.
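This last step is routine numerics. As a sketch (the cubic below is made up for illustration; it is not taken from the example in this section), Newton's method finds a real root, which Eve would then round to the nearest integer and check by re-encrypting:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton's method for a real root of f, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

# A hypothetical cubic g(x) = x**3 - 15*x - 4 with integer root x = 4:
root = newton(lambda x: x**3 - 15 * x - 4,
              lambda x: 3 * x**2 - 15,
              x0=5.0)
print(round(root))   # 4
```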
In exactly the same way, we can find small solutions (if they exist) to a polynomial congruence of degree , using a lattice of dimension . Of course, must be small enough that LLL will run in a reasonable time. Improvements to this method exist. Coppersmith ([Coppersmith2]) gave an algorithm using higher-dimensional lattices that looks for small solutions to a monic (that is, the highest-degree coefficient equals 1) polynomial equation of degree . If , then the algorithm runs in time polynomial in and .
Let
(which happens to be the product of the primes and , but Eve does not know this). Alice is sending the message
where ** denotes a two-digit number. Therefore the message is where and . Suppose Alice sends the ciphertext . Eve forms the polynomial
where
Note that .
Eve uses LLL to find a root of . She lets and forms the vectors
The LLL algorithm produces the vector
Eve then looks at the polynomial
The roots of are computed numerically to be
It is easily checked that , so the plaintext is
Of course, a brute force search through all possibilities for the two-digit number could have been used to find the answer in this case. However, if is taken to be a 200-digit number, then can have around 33 digits. A brute force search would usually not succeed in this situation.
If the dimension is large, say , the LLL algorithm is not effective in finding short vectors. This allows lattices to be used in cryptographic constructions. Several cryptosystems based on lattices have been proposed. One of the most successful current systems is NTRU (rumored to stand for either “Number Theorists aRe Us” or “Number Theorists aRe Useful”). It is a public key system. In the following, we describe the algorithm for transmitting messages using a public key. There is also a related signature scheme, which we won’t discuss. Although the initial description of NTRU does not involve lattices, we’ll see later that it also has a lattice interpretation.
First, we need some preliminaries. Choose an integer . We will work with the set of polynomials of degree less than . Let
be two such polynomials. Define
where
The summation is over all pairs with .
For example, let , let , and let . Then the coefficient of in is
and
From a slightly more advanced viewpoint, is simply multiplication of polynomials mod (see Exercise 5 and Section 3.11).
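The star operation is easy to compute directly from its definition; here is a minimal sketch (coefficient lists are written constant term first, and the optional modulus reduces the result mod p or mod q):

```python
def star(a, b, N, modulus=None):
    """Multiply two polynomials of degree < N modulo X**N - 1.

    The k-th coefficient of the result is the sum of a[i]*b[j]
    over all pairs (i, j) with i + j congruent to k mod N.
    """
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    if modulus is not None:
        c = [x % modulus for x in c]
    return c

# (1 + x) * (1 + x^2) = 1 + x + x^2 + x^3, and x^3 wraps to 1 when N = 3:
print(star([1, 1, 0], [1, 0, 1], 3))   # [2, 1, 1]
```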
NTRU works with certain sets of polynomials with small coefficients, so it is convenient to have a notation for them. Let
We can now describe the NTRU algorithm. Alice wants to send a message to Bob, so Bob needs to set up his public key. He chooses three integers with the requirements that and that is much smaller than . Recommended choices are
for moderate security and
for very high security. Of course, these parameters will need to be adjusted as attacks improve. Bob then chooses two secret polynomials and with small coefficients (we’ll say more about how to choose them later). Moreover, should be invertible mod and mod , which means that there exist polynomials and of degree less than such that
Bob calculates
Bob’s public key is
His private key is . Although can be calculated easily from , he should store (secretly) since he will need it in the decryption process. What about ? Since , he is not losing information by not storing it (and he does not need it in decryption).
Alice can now send her message. She represents the message, by some prearranged procedure, as a polynomial of degree less than with coefficients of absolute value at most . When , this means that has coefficients . Alice then chooses a small polynomial (“small” will be made more precise shortly) and computes
She sends the ciphertext to Bob.
Bob decrypts by first computing
with all coefficients of the polynomial of absolute value at most , then (usually) recovering the message as
Why should this work? In fact, sometimes it doesn’t, but experiments with the parameter choices given below indicate that the probability of decryption errors is less than . But here is why the decryption is usually correct. We have
Since , , , have small coefficients and is much smaller than , it is very probable that , before reducing mod , has coefficients of absolute value less than . In this case, we have equality
Then
so the decryption works.
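The whole key generation/encryption/decryption cycle can be sketched on a toy scale. The parameters below are made up and far too small for any security, and q is taken to be prime (rather than the power of 2 used in practice) so that polynomial inversion can be done by plain Gaussian elimination; the polynomials g, m, r are likewise arbitrary small examples.

```python
from itertools import product

N, p, q = 7, 3, 67   # toy parameters, chosen only for illustration

def star(a, b, modulus):
    """Convolution product of a and b modulo X**N - 1 and modulus."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return [x % modulus for x in c]

def center(a, modulus):
    """Lift coefficients to representatives of absolute value <= modulus/2."""
    return [((x + modulus // 2) % modulus) - modulus // 2 for x in a]

def poly_inv(f, prime):
    """Inverse of f in (Z/prime)[X]/(X**N - 1), or None if none exists.

    Solves star(f, x) = 1 as a linear system by Gaussian elimination.
    """
    A = [[f[(k - j) % N] % prime for j in range(N)] + [int(k == 0)]
         for k in range(N)]
    for col in range(N):
        piv = next((r for r in range(col, N) if A[r][col] % prime), None)
        if piv is None:
            return None                      # f is not invertible
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, prime)
        A[col] = [x * inv % prime for x in A[col]]
        for r in range(N):
            if r != col and A[r][col]:
                A[r] = [(x - A[r][col] * y) % prime
                        for x, y in zip(A[r], A[col])]
    return [A[k][N] for k in range(N)]

# Bob: find a small f invertible mod p and mod q, and pick a small g.
for cand in product([-1, 0, 1], repeat=N):
    f = list(cand)
    Fp, Fq = poly_inv(f, p), poly_inv(f, q)
    if Fp is not None and Fq is not None:
        break
g = [1, 0, -1, 1, 0, 0, -1]
h = star(Fq, g, q)                     # public key

# Alice: small message and small random blinding polynomial.
m = [1, 0, -1, 1, 1, 0, -1]
r = [1, -1, 0, 0, 1, 0, 0]
e = [(p * x + y) % q for x, y in zip(star(r, h, q), m)]   # ciphertext

# Bob: a = f*e mod q (centered), then recover m as Fp*a mod p (centered).
a = center(star(f, e, q), q)
recovered = center(star(Fp, a, p), p)
print(recovered == m)                  # True
```

With these tiny parameters the coefficients of the polynomial being centered are bounded well below q/2, so decryption is guaranteed to succeed; the point of the sketch is only the algebra, not the parameter choices.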
For , the recommended choices for are
(recall that this means that the coefficients of are fifteen 1s, fourteen −1s, and the remaining 78 coefficients are 0).
For , the recommended choices for are
With these choices of parameters, the polynomials , , are small enough that the decryption works with very high probability.
The reason f has a different number of 1s and −1s is to ensure that f(1) ≠ 0. It can be shown that if f(1) = 0, then f cannot be invertible.
Let (this choice of is much too small for any security; we use it only in order to give an explicit example). Take and . Since
we have
Also,
Bob’s public key is
His private key is
Alice takes her message to be . She chooses . Then the ciphertext is
Bob decrypts by first computing
then
Therefore, Bob has obtained the message.
Let . Form the matrix
If we represent and by the row vectors
then we see that .
Let be the identity matrix. Form the matrix
Let be the lattice generated by the rows of . Since , we can write for some polynomial . Represent as an -dimensional row vector , so is a -dimensional row vector. Then
so is in the lattice (see Exercise 3). Since and have small coefficients, is a small vector in the lattice . Therefore, the secret information for the key can be represented as a short vector in a lattice. An attacker can try to apply a lattice reduction algorithm to find short vectors, and possibly obtain . Once the attacker has found and , the system is broken.
To stop lattice attacks, we need to make the lattice have high enough dimension that lattice reduction algorithms are inefficient. This is easily achieved by making sufficiently large. However, if is too large, the encryption and decryption algorithms become slow. The suggested values of were chosen to achieve security while keeping the cryptographic algorithms efficient.
Lattice reduction methods have the best success when the shortest vector is small (more precisely, small when compared to the th root of the determinant of the -dimensional lattice). Improvements in the above lattice attack can be obtained by replacing in the upper left block of by for a suitably chosen real number . This makes the resulting short vector comparatively shorter and thus easier to find. The parameters in NTRU, especially the sizes of and , have been chosen so as to limit the effect of these lattice attacks.
So far, the NTRU cryptosystem appears to be strong; however, as with many new cryptosystems, the security is still being studied. If no successful attacks are found, NTRU will have the advantage of providing security comparable to RSA and other public key methods, but with smaller key size and with faster encryption and decryption times.
Another lattice-based public key cryptosystem was developed by Goldreich, Goldwasser, and Halevi in 1997. Let be a 300-dimensional lattice that is a subset of the points with integral coordinates in 300-dimensional real space .
The private key is a “good” basis of , given by the columns of a matrix , where “good” means that the entries of are small.
The public key is a “bad basis” of , given by the columns of a matrix , where is a secret matrix with integral entries and with determinant 1. The determinant condition implies that the entries of are also integers. “Bad” means that has many large entries.
A message is a 300-dimensional vector with integral entries, which is encrypted to obtain the ciphertext
where is a 300-dimensional vector whose entries are chosen randomly from .
The decryption is carried out by computing
where for a vector means we round off each entry to the nearest integer (a half-integer goes whichever way you specify). Why does this decryption work? First, , so
Since and have integral entries, so does the . Since is good, the entries of tend to be small fractions, so they disappear in the rounding. Therefore, probably equals , so
To keep things simple, we’ll work in two-dimensional, rather than 300-dimensional, space. Let
and
Then
To decrypt, we first compute
Therefore, the decryption is
Suppose, instead, that we tried to decrypt by computing ? In the present example,
and
This rounds off to , which is nowhere close to the original message. The problem is that the entries of are much larger than those of , so the small error introduced by is amplified by .
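Since the numerical values of the example above are not reproduced here, the scheme can be illustrated with a made-up two-dimensional instance. The matrices V (good basis), W (unimodular), and the message and error vectors below are all invented for the sketch; exact rational arithmetic keeps the rounding step honest.

```python
from fractions import Fraction

# Private "good" basis V and secret unimodular W (det W = 2*5 - 3*3 = 1).
V = [[4, 1],
     [1, 3]]
W = [[2, 3],
     [3, 5]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(Fraction(A[i][j]) * x[j] for j in range(len(x)))
            for i in range(len(A))]

def inv2(A):
    det = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

B = matmul(V, W)                              # public "bad" basis B = VW

m = [3, -7]                                   # message vector
e = [Fraction(3, 10), Fraction(-2, 5)]        # small error vector
c = [bm + ei for bm, ei in zip(matvec(B, m), e)]   # ciphertext c = Bm + e

# Decryption: round V^(-1) c to the nearest lattice combination,
# then undo the basis change: m = B^(-1) V round(V^(-1) c).
t = [round(x) for x in matvec(inv2(V), c)]    # equals W m after rounding
decrypted = [int(x) for x in matvec(inv2(B), matvec(V, t))]
print(decrypted)    # [3, -7]
```

Because V is short and nearly orthogonal, the entries of V⁻¹e are small fractions and vanish in the rounding; rounding B⁻¹c instead would amplify the error, exactly as described above.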
Attacking this system involves the Closest Vector Problem: Given a point in , find the point in closest to .
We have , and is close to since it is moved off the lattice by the small vector .
For general lattices, the Closest Vector Problem is very hard. But it seems to be easier if the point is very close to a lattice point, which is the case in this cryptosystem. So the actual level of security is not yet clear.
If a quantum computer is built (see Chapter 25), cryptosystems based on factorization or discrete logs will become less secure. An active area of current research involves designing systems that cannot be broken by a quantum computer. Some of the most promising candidates seem to be lattice-based systems since their security does not depend on the difficulty of computing discrete logs or factoring, and no attack with a quantum computer has been found. Similarly, the McEliece cryptosystem, which is based on error-correcting codes (see Section 24.10) and is similar to the system in Section 23.5, seems to be a possibility.
One of the potential difficulties with using many of these lattice-based systems is the key size: In the system in Section 23.5, the public key requires integer entries. Many of the entries of should be large, so let’s say that we use 100 bits to specify each one. This means that the key requires bits, much more than is used in current public key cryptosystems.
For more on this subject, see [Bernstein et al.].
Find a reduced basis and a shortest nonzero vector in the lattice generated by the vectors .
Find a reduced basis for the lattice generated by the vectors , .
Find the vector in the lattice of part (a) that is closest to the vector . (Remark: This is an example of the closest vector problem. It is fairly easy to solve when a reduced basis is known, but difficult in general. For cryptosystems based on the closest vector problem, see [Nguyen-Stern].)
Let be linearly independent row vectors in . Form the matrix whose rows are the vectors . Let be a row by , and show that every vector in the lattice can be written in this way.
Let be a basis of a lattice. Let be integers with , and let
Show that
Show that is also a basis of the lattice.
Let be a positive integer.
Show that if , then is a multiple of .
Let . Let be integers and let
where the sum is over pairs with . Show that
is a multiple of .
Let and be polynomials of degree less than . Let be the usual product of and and let be defined as in Section 23.4. Show that is a multiple of .
Let and be positive integers. Suppose that there is a polynomial such that . Show that . (Hint: Use Exercise 5(c).)
In the NTRU cryptosystem, suppose we ignore and let . Show how an attacker can obtain the message quickly.
In the NTRU cryptosystem, suppose is a multiple of . Show how an attacker can obtain the message quickly.
In a good cryptographic system, changing one bit in the ciphertext changes enough bits in the corresponding plaintext to make it unreadable. Therefore, we need a way of detecting and correcting errors that could occur when ciphertext is transmitted.
Many noncryptographic situations also require error correction; for example, fax machines, computer hard drives, CD players, and anything that works with digitally represented data. Error correcting codes solve this problem.
Though coding theory (communication over noisy channels) is technically not part of cryptology (communication over nonsecure channels), in Section 24.10 we describe how error correcting codes can be used to construct a public key cryptosystem.
All communication channels contain some degree of noise, namely interference caused by various sources such as neighboring channels, electric impulses, deterioration of the equipment, etc. This noise can interfere with data transmission. Just as holding a conversation in a noisy room becomes more difficult as the noise becomes louder, so too does data transmission become more difficult as the communication channel becomes noisier. In order to hold a conversation in a loud room, you either raise your voice, or you are forced to repeat yourself. The second method is the one that will concern us; namely, we need to add some redundancy to the transmission in order for the recipient to be able to reconstruct the message. In the following, we give several examples of techniques that can be used. In each case, the symbols in the original message are replaced by codewords that have some redundancy built into them.
Consider an alphabet . We want to send a letter across a noisy channel that has a probability of error. If we want to send , for example, then there is a 90% chance that the symbol received is . This leaves too large a chance of error. Instead, we repeat the symbol three times, thus sending . Suppose an error occurs and the received word is . We take the symbol that occurs most frequently as the message, namely . The probability of the correct message being found is the probability that all three letters are correct plus the probability that exactly one of the three letters is wrong:
which leaves a significantly smaller chance of error.
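Assuming the 90%-correct channel described above, the success probability of majority-vote decoding is easy to check directly:

```python
# Probability that three-fold repetition decodes correctly on a channel
# where each symbol independently arrives intact with probability 0.9:
p_ok = 0.9
p_correct = p_ok**3 + 3 * p_ok**2 * (1 - p_ok)   # all right, or exactly one wrong
print(p_correct)   # approximately 0.972
```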
Two of the most important concepts for codes are error detection and error correction. If there are at most two errors, this repetition code allows us to detect that errors have occurred. If the received message is , then there could be either one error from or two errors from ; we cannot tell which. If at most one error has occurred, then we can correct the error and deduce that the message was . Note that if we used only two repetitions instead of three, we could detect the existence of one error, but we could not correct it (did come from or ?).
This example was chosen to point out that error correcting codes can use arbitrary sets of symbols. Typically, however, the symbols that are used are mathematical objects such as integers mod a prime or binary strings. For example, we can replace the letters by 2-bit strings: 00, 01, 10, 11. The preceding procedure (repeating three times) then gives us the codewords
Suppose we want to send a message of seven bits. Add an eighth bit so that the number of nonzero bits is even. For example, the message 0110010 becomes 01100101, and the message 1100110 becomes 11001100. An error of one bit during transmission is immediately discovered since the message received will have an odd number of nonzero bits. However, it is impossible to tell which bit is incorrect, since an error in any bit could have yielded the odd number of nonzero bits. When an error is detected, the best thing to do is resend the message.
The parity check code of the previous example can be used to design a code that can correct an error of one bit. The two-dimensional parity code arranges the data into a two-dimensional array, and then parity bits are computed along each row and column.
To demonstrate the code, suppose we want to encode the 20 data bits . We arrange the bits into a matrix
and calculate the parity bits along the rows and columns. We define the last bit in the lower right corner of the extended matrix by calculating the parity of the parity bits that were calculated along the columns. This results in the matrix
Suppose that this extended matrix of bits is transmitted and that a bit error occurs at the bit in the third row and fourth column. The receiver arranges the received bits into a matrix and obtains
The parities of the third row and fourth column are odd, so this locates the error as occurring at the third row and fourth column.
If two errors occur, this code can detect their existence. For example, if bit errors occur at the second and third bits of the second row, then the parity checks of the second and third columns will indicate the existence of two bit errors. However, in this case it is not possible to correct the errors, since there are several possible locations for them. For example, if the second and third bits of the fifth row were incorrect instead, then the parity checks would be the same as when these errors occurred in the second row.
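The row/column parity scheme can be sketched as follows. The 20 data bits below are arbitrary stand-ins (the bits of the example above are not reproduced here), arranged in a 4 × 5 array:

```python
def encode_2d_parity(bits, rows, cols):
    """Arrange bits row-major into a rows x cols array and append a
    parity bit to each row and column (the corner bit is the parity
    of the column-parity row)."""
    M = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    for row in M:
        row.append(sum(row) % 2)                 # row parities
    M.append([sum(M[r][c] for r in range(rows)) % 2
              for c in range(cols + 1)])         # column parities
    return M

def locate_single_error(M):
    """Return (row, col) of a single bit error, or None if all checks pass."""
    bad_r = [r for r in range(len(M)) if sum(M[r]) % 2]
    bad_c = [c for c in range(len(M[0])) if sum(row[c] for row in M) % 2]
    if not bad_r and not bad_c:
        return None
    return (bad_r[0], bad_c[0])

data = [1, 0, 1, 1, 0,
        0, 1, 0, 0, 1,
        1, 1, 1, 0, 0,
        0, 0, 1, 1, 0]          # made-up data bits
M = encode_2d_parity(data, 4, 5)
M[2][3] ^= 1                    # introduce a single bit error
print(locate_single_error(M))   # (2, 3)
```

The odd row parity and odd column parity intersect at the flipped bit, which can then be corrected by flipping it back.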
The original message consists of blocks of four binary bits. These are replaced by codewords, which are blocks of seven bits, by multiplying (mod 2) on the right by the matrix
For example, the message becomes
Since the first four columns of are the identity matrix, the first four entries of the output are the original message. The remaining three bits provide the redundancy that allows error detection and correction. In fact, as we’ll see, we can easily correct an error if it affects only one of the seven bits in a codeword.
Suppose, for example, that the codeword 1100011 is sent but is received as 1100001. How do we detect and correct the error? Write in the form , where is a matrix. Form the matrix , where is the transpose of . Multiply the message received times the transpose of :
This is the 6th row of , which means there was an error in the 6th bit of the message received. Therefore, the correct codeword was 1100011. The first four bits give the original message 1100. If there had been no errors, the result of multiplying by would have been , so we would have recognized that no correction was needed. This rather mysterious procedure will be explained when we discuss Hamming codes in Section 24.5. For the moment, note that it allows us to correct errors of one bit fairly efficiently.
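The computation can be sketched in code. The parity block P below is the common systematic choice for the [7, 4] Hamming code, taken as an assumption here since the book's matrix entries are not reproduced; it does reproduce the codeword 1100011 for the message 1100 and the correction of an error in the 6th bit.

```python
# Systematic generator G = [I4 | P] and parity-check H = [P^T | I3].
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]
H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)]
     for i in range(3)]

def encode(msg):
    """Multiply the 4-bit message by G (mod 2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """Multiply the received 7-bit word by the transpose of H (mod 2)."""
    return [sum(w * h for w, h in zip(word, row)) % 2 for row in H]

def correct(word):
    """Fix a single-bit error: the syndrome matches a column of H."""
    s = syndrome(word)
    if s == [0, 0, 0]:
        return word
    pos = [list(col) for col in zip(*H)].index(s)
    fixed = word[:]
    fixed[pos] ^= 1
    return fixed

c = encode([1, 1, 0, 0])
print(c)                      # [1, 1, 0, 0, 0, 1, 1]
r = c[:]
r[5] ^= 1                     # error in the 6th bit: received 1100001
print(correct(r) == c)        # True
```

The nonzero syndrome is the 6th column of H (equivalently, the 6th row of its transpose), which is exactly how the error position is identified above.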
The Hamming [7, 4] code is a significant improvement over the repetition code. In the Hamming code, if we want to send four bits of information, we transmit seven bits. Up to two errors can be detected and up to one error can be corrected. For a repetition code to achieve this level of error detection and correction, we need to transmit 12 bits in order to send a 4-bit message. Later, we'll express this mathematically by saying that the code rate of this Hamming code is 4/7, while the code rate of the repetition code is 4/12 = 1/3. Generally, a higher code rate is better, as long as not too much error correcting capability is lost. For example, sending a 4-bit message as itself has a code rate of 1 but is unsatisfactory in most situations since there is no error correction capability.
The International Standard Book Number (ISBN) provides another example of an error detecting code. The ISBN is a 10-digit codeword that is assigned to each book when it is published. For example, the first edition of this book had ISBN 0-13-061814-4. The first digit represents the language that is used; 0 indicates English. The next two digits represent the publisher; for example, 13 is associated with Pearson/Prentice Hall. The next six numbers correspond to a book identity number that is assigned by the publisher. The tenth digit is chosen so that the ISBN satisfies
Notice that the equation is done modulo 11. The first nine numbers are taken from {0, 1, …, 9}, but the tenth may be 10, in which case it is represented by the symbol X. Books published in 2007 or later use a 13-digit ISBN, which uses a slightly different sum and works mod 10.
Suppose that the ISBN number is sent over a noisy channel, or is written on a book order form, and is received as . The ISBN code can detect a single error, or a double error that occurs due to the transposition of the digits. To accomplish this, the receiver calculates the weighted checksum
If , then we do not detect any errors, though there is a small chance that an error occurred and was undetected. Otherwise, we have detected an error. However, we cannot correct it (see Exercise 2).
If is the same as except in one place , we may write where . Calculating gives
Thus, if a single error occurs, we can detect it. The other type of error that can be reliably detected is when and have been transposed. This is one of the most common errors that occur when someone is copying numbers. In this case and . Calculating gives
If , then the checksum is not equal to , and an error is detected.
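The two detection cases can be checked with a short routine; the weighted checksum below is the standard ISBN-10 sum of i·aᵢ mod 11, using the edition's ISBN quoted above:

```python
def isbn10_checksum(isbn):
    """Weighted ISBN-10 checksum: sum of i * a_i (i = 1..10) mod 11.

    'X' stands for the value 10.  A valid ISBN gives checksum 0.
    """
    digits = [10 if ch == 'X' else int(ch) for ch in isbn if ch not in '- ']
    return sum(i * a for i, a in enumerate(digits, start=1)) % 11

print(isbn10_checksum('0-13-061814-4'))    # 0: no error detected

# A single changed digit and a transposition are both detected:
print(isbn10_checksum('0-13-061815-4'))    # nonzero
print(isbn10_checksum('0-13-016814-4'))    # nonzero
```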
This code was used by the Mariner spacecraft in 1969 as it sent pictures back to Earth. There are 64 codewords; 32 are represented by the rows of the matrix
The matrix is constructed as follows. Number the rows and columns from 0 to 31. To obtain the entry in the th row and th column, write and in binary. Then
For example, when and , we have and . Therefore, .
The other 32 codewords are obtained by using the rows of . Note that the dot product of any two rows of is 0, unless the two rows are equal, in which case the dot product is 32.
When Mariner sent a picture, each pixel had a darkness given by a 6-bit number. This was changed to one of the 64 codewords and transmitted. A received message (that is, a string of 1s and −1s of length 32) can be decoded (that is, corrected to a codeword) as follows. Take the dot product of the message with each row of . If the message is correct, it will have dot product 0 with all rows except one, and it will have dot product with that row. If the dot product is 32, the codeword is that row of . If it is , the codeword is the corresponding row of . If the message has one error, the dot products will all be , except for one, which will be . This again gives the correct row of or . If there are two errors, the dot products will all be , except for one, which will be , or . Continuing, we see that if there are seven errors, all the dot products will be between and , except for one between and or between and , which yields the correct codeword. With eight or more errors, the dot products start overlapping, so correction is not possible. However, detection is possible for up to 15 errors, since it takes 16 errors to change one codeword to another.
This code has a relatively low code rate of 6/32 = 3/16, since it uses 32 bits to send a 6-bit message. However, this is balanced by a high error correction rate. Since the messages from Mariner were fairly weak, the potential for errors was high, so high error correction capability was needed. The other option would have been to increase the strength of the signal and use a code with a higher code rate and less error correction. The transmission would have taken less time and therefore potentially have used less energy. However, in this case, it turned out that the energy saved by using a weaker signal more than offset the loss in speed. This issue (technically known as coding gain) is an important engineering consideration in the choice of which code to use in a given application.
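The construction of the matrix and the dot-product decoding can be sketched as follows (using ±1 entries, so a row's dot product with itself is 32):

```python
def hadamard_row(i, n=32):
    """Row i of the 32 x 32 matrix: entry j is (-1) raised to the
    dot product of i and j written in binary."""
    return [(-1) ** bin(i & j).count('1') for j in range(n)]

H = [hadamard_row(i) for i in range(32)]

def decode(word):
    """Dot-product decoding: the row of H (or its negative) whose dot
    product has the largest absolute value is the corrected codeword."""
    dots = [sum(w * h for w, h in zip(word, row)) for row in H]
    i = max(range(32), key=lambda k: abs(dots[k]))
    return H[i] if dots[i] > 0 else [-x for x in H[i]]

# Flip seven entries of a codeword; decoding still recovers it,
# since the true row's dot product is 32 - 14 = 18 versus at most 14.
c = H[13][:]
for pos in [0, 3, 7, 11, 19, 23, 30]:
    c[pos] = -c[pos]
print(decode(c) == H[13])    # True
```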
A sender starts with a message and encodes it to obtain codewords consisting of sequences of symbols. These are transmitted over a noisy channel, depicted in Figure 24.1, to the receiver. Often the sequences of symbols that are received contain errors and therefore might not be codewords. The receiver must decode, which means correct the errors in order to change what is received back to codewords and then recover the original message.
The symbols used to construct the codewords belong to an alphabet. When the alphabet consists of the binary bits 0 and 1, the code is called a binary code. A code that uses sequences of three symbols, often represented as integers mod 3, is called a ternary code. In general, a code that uses an alphabet consisting of q symbols is called a q-ary code.
Let be an alphabet and let denote the set of -tuples of elements of . A code of length is a nonempty subset of .
The -tuples that make up a code are called codewords, or code vectors. For example, in a binary repetition code where each symbol is repeated three times, the alphabet is the set and the code is the set .
Strictly speaking, the codes in the definition are called block codes. Other codes exist where the codewords can have varying lengths. These will be mentioned briefly at the end of this chapter, but otherwise we focus only on block codes.
For a code that is a random subset of , decoding could be a time-consuming procedure. Therefore, most useful codes are subsets of satisfying additional conditions. The most common is to require that be a finite field, so that is a vector space, and require that the code be a subspace of this vector space. Such codes are called linear and will be discussed in Section 24.4.
For the rest of this section, however, we work with arbitrary, possibly nonlinear, codes. We always assume that our codewords are -dimensional vectors.
In order to decode, it will be useful to put a measure on how close two vectors are to each other. This is provided by the Hamming distance. Let be two vectors in . The Hamming distance is the number of places where the two vectors differ. For example, if we use binary vectors and have the vectors and , then and differ in two places (the 4th and the 7th) so . As another example, suppose we are working with the usual English alphabet. Then since the two strings differ in four places.
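Counting the differing positions is a one-line computation; the example vectors below are stand-ins (the text's own vectors are not reproduced here), chosen to differ in the 4th and 7th places:

```python
def hamming(u, v):
    """Hamming distance: the number of positions where u and v differ."""
    if len(u) != len(v):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(u, v))

print(hamming('0110101', '0111100'))   # 2 (the 4th and 7th places)
print(hamming('fourth', 'eighth'))     # 4
```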
The importance of the Hamming distance is that it measures the minimum number of “errors” needed for to be changed to . The following gives some of its basic properties.
The Hamming distance d(u, v) is a metric on the set of n-tuples, which means that it satisfies
1. d(u, v) ≥ 0, and d(u, v) = 0 if and only if u = v
2. d(u, v) = d(v, u) for all u, v
3. d(u, v) ≤ d(u, w) + d(w, v) for all u, v, w.
The third property is often called the triangle inequality.
Proof. (1) is exactly the same as saying that and differ in no places, which means that . Part (2) is obvious. For part (3), observe that if and differ in a place, then either and differ at that place, or and differ at that place, or both. Therefore, the number of places where and differ is less than or equal to the number of places where and differ, plus the number of places where and differ.
For a code , one can calculate the Hamming distance between any two distinct codewords. Out of this table of distances, there is a minimum value , which is called the minimum distance of . In other words,
The minimum distance of is a very important number, since it gives the smallest number of errors needed to change one codeword into another.
When a codeword is transmitted over a noisy channel, errors are introduced into some of the entries of the vector. We correct these errors by finding the codeword whose Hamming distance from the received vector is as small as possible. In other words, we change the received vector to a codeword by changing the fewest places possible. This is called nearest neighbor decoding.
We say that a code can detect up to errors if changing a codeword in at most places cannot change it to another codeword. The code can correct up to errors if, whenever changes are made at or fewer places in a codeword , then the closest codeword is still . This definition says nothing about an efficient algorithm for correcting the errors. It simply requires that nearest neighbor decoding gives the correct answer when there are at most errors. An important result from the theory of error correcting codes is the following.
1. A code can detect up to s errors if d ≥ s + 1. 2. A code can correct up to t errors if d ≥ 2t + 1.
Proof.
Suppose that . If a codeword is sent and or fewer errors occur, then the received message cannot be a different codeword. Hence, an error is detected.
Suppose that . Assume that the codeword is sent and the received word has or fewer errors; that is, . If is any other codeword besides , we claim that . To see this, suppose that . Then, by applying the triangle inequality, we have
This is a contradiction, so . Since has or fewer errors, nearest neighbor decoding successfully decodes to .
How does one find the nearest neighbor? One way is to calculate the distance between the received message and each of the codewords, then select the codeword with the smallest Hamming distance. However, this is impractical for large codes. In general, the problem of decoding is challenging, and considerable research effort is devoted to looking for fast decoding algorithms. In later sections, we'll discuss a few decoding techniques that have been developed for special classes of codes.
Before continuing, it is convenient to introduce some notation.
A code of length n, with M codewords, and with minimum distance d, is called an (n, M, d) code.
When we discuss linear codes, we'll have a similar notation, namely, an [n, k, d] code. Note that this latter notation uses square brackets, while the present one uses curved parentheses. (These two similar notations cause less confusion than one might expect!) The relation is that an [n, k, d] binary linear code is an (n, 2^k, d) code.
The binary repetition code of length 3 is a $(3, 2, 3)$ code. The Hadamard code of Exercise 6, Section 24.1, is a $(32, 64, 16)$ code (it could correct up to 7 errors because $16 \geq 2 \cdot 7 + 1$).
If we have a $q$-ary $(n, M, d)$ code, then we define the code rate, or information rate, $R$ by
$$R = \frac{\log_q M}{n}.$$
For example, for the Hadamard code, $R = \log_2 64 / 32 = 6/32 = 3/16$. The code rate represents the ratio of the number of input data symbols to the number of transmitted code symbols. It is an important parameter to consider when implementing real-world systems, as it represents what fraction of the bandwidth is being used to transmit actual data. The code rate was mentioned in Examples 4 and 6 in Section 24.1. A few limitations on the code rate will be discussed in Section 24.3.
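The definition $R = (\log_q M)/n$ is easy to evaluate numerically; the helper name below is our own, used only for illustration.

```python
from math import log

def code_rate(q, M, n):
    # R = (log_q M) / n for a q-ary (n, M, d) code.
    return log(M, q) / n

print(code_rate(2, 64, 32))  # Hadamard (32, 64, 16) code: 6/32 = 0.1875
print(code_rate(2, 16, 7))   # Hamming (7, 16, 3) code: 4/7
```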
Given a code, it is possible to construct other codes that are essentially the same. Suppose that we have a codeword $c = (c_1, \ldots, c_n)$. Then we may define a positional permutation of $c$ by permuting the order of the entries of $c$. For example, the new vector $(c_2, c_3, c_1)$ is a positional permutation of $(c_1, c_2, c_3)$. Another type of operation that can be done is a symbol permutation. Suppose that we have a permutation of the $q$-ary symbols. Then we may fix a position and apply this permutation of symbols to that fixed position for every codeword. For example, suppose that we have the permutation $0 \to 1$, $1 \to 2$, $2 \to 0$ of the ternary symbols, and that we have the codewords $(0, 1, 2)$, $(1, 0, 2)$, and $(2, 2, 1)$. Then applying the permutation to the second position of all of the codewords gives the following vectors: $(0, 2, 2)$, $(1, 1, 2)$, and $(2, 0, 1)$.
Formally, we say that two codes are equivalent if one code can be obtained from the other by a series of the following operations:
Permuting the positions of the code
Permuting the symbols appearing in a fixed position of all codewords
It is easy to see that all codes equivalent to an $(n, M, d)$ code are also $(n, M, d)$ codes. However, for certain choices of $n$, $M$, and $d$, there can be several inequivalent $(n, M, d)$ codes.
We have shown that an $(n, M, d)$ code can correct $t$ errors if $d \geq 2t + 1$. Hence, we would like the minimum distance $d$ to be large so that we can correct as many errors as possible. But we also would like $M$ to be large so that the code rate will be as close to 1 as possible. This would allow us to use bandwidth efficiently when transmitting messages over noisy channels. Unfortunately, increasing $d$ tends to increase $n$ or decrease $M$.
In this section, we study the restrictions on $n$, $M$, and $d$ without worrying about practical aspects such as whether the codes with good parameters have efficient decoding algorithms. It is still useful to have results such as the ones we'll discuss since they give us some idea of how good an actual code is, compared to the theoretical limits.
First, we treat upper bounds for $M$ in terms of $n$ and $d$. Then we show that there exist codes with $M$ larger than certain lower bounds. Finally, we see how some of our examples compare with these bounds.
Our first result was given by R. Singleton in 1964 and is known as the Singleton bound.
Let $C$ be a $q$-ary $(n, M, d)$ code. Then
$$M \leq q^{n - d + 1}.$$
Proof. For a codeword $c = (c_1, \ldots, c_n)$, let $c' = (c_d, c_{d+1}, \ldots, c_n)$ be the vector obtained by removing the first $d - 1$ entries of $c$. If $u, v$ are two distinct codewords, then they differ in at least $d$ places. Since $u'$ and $v'$ are obtained by removing $d - 1$ entries from $u$ and $v$, they must differ in at least one place, so $u' \neq v'$. Therefore, the number $M$ of codewords equals the number of vectors $c'$ obtained in this way. There are at most $q^{n-d+1}$ such vectors since there are $n - d + 1$ positions in these vectors. This implies that $M \leq q^{n-d+1}$, as desired.
The code rate of a $q$-ary $(n, M, d)$ code is at most $1 - (d - 1)/n$.
Proof. The corollary follows immediately from the Singleton bound and the definition of code rate: $R = (\log_q M)/n \leq (n - d + 1)/n = 1 - (d - 1)/n$.
The corollary implies that if the relative minimum distance $d/n$ is large, the code rate is forced to be small.
A code that satisfies the Singleton bound with equality is called an MDS code (maximum distance separable). The Singleton bound can be rewritten as $d \leq n - \log_q M + 1$, so an MDS code has the largest possible value of $d$ for a given $n$ and $M$. The Reed-Solomon codes (Section 24.9) are an important class of MDS codes.
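Checking whether given parameters meet the Singleton bound is a one-line computation; the helper names here are our own illustration.

```python
def singleton_bound(q, n, d):
    # Singleton: M <= q^(n - d + 1) for any q-ary (n, M, d) code.
    return q ** (n - d + 1)

def is_mds(q, n, M, d):
    # An MDS code meets the Singleton bound with equality.
    return M == singleton_bound(q, n, d)

print(is_mds(2, 3, 2, 3))    # binary repetition (3, 2, 3): True (MDS)
print(is_mds(2, 7, 16, 3))   # Hamming (7, 16, 3): False (16 < 32)
```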
Before deriving another upper bound, we need to introduce a geometric interpretation that is useful in error correction. A Hamming sphere of radius $t$ centered at a codeword $c$ is denoted by $B(c, t)$ and is defined to be the set of all vectors that are at most a Hamming distance of $t$ from the codeword $c$. That is, a vector $u$ belongs to the Hamming sphere $B(c, t)$ if $d(u, c) \leq t$. We calculate the number of vectors in $B(c, t)$ in the following lemma.
A sphere of radius $t$ in $n$-dimensional $q$-ary space has
$$\sum_{m=0}^{t} \binom{n}{m} (q - 1)^m$$
elements.
Proof. First we calculate the number of vectors that are a distance 1 from $c$. These vectors are the ones that differ from $c$ in exactly one location. There are $n$ possible locations and $q - 1$ ways to make an entry different. Thus the number of vectors that have a Hamming distance of 1 from $c$ is $n(q - 1)$. Now let's calculate the number of vectors that have Hamming distance $m$ from $c$. There are $\binom{n}{m}$ ways in which we can choose $m$ locations to differ from the values of $c$. For each of these $m$ locations, there are $q - 1$ choices for symbols different from the corresponding symbol of $c$. Hence, there are
$$\binom{n}{m} (q - 1)^m$$
vectors that have a Hamming distance of exactly $m$ from $c$. Summing over $m = 1, \ldots, t$ and including the vector $c$ itself (the term $m = 0$, since $\binom{n}{0}(q-1)^0 = 1$), we get the result:
$$|B(c, t)| = \sum_{m=0}^{t} \binom{n}{m} (q - 1)^m.$$
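The lemma's count is easy to verify by direct computation; this small sketch (our own helper name) uses Python's exact binomial coefficients.

```python
from math import comb

def sphere_volume(q, n, t):
    # |B(c, t)| = sum_{m=0}^{t} C(n, m) (q - 1)^m, independent of the center c.
    return sum(comb(n, m) * (q - 1) ** m for m in range(t + 1))

print(sphere_volume(2, 7, 1))   # 1 + 7 = 8
print(sphere_volume(3, 4, 2))   # 1 + 4*2 + 6*4 = 33
```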
We may now state the Hamming bound, which is also called the sphere packing bound.
Let $C$ be a $q$-ary $(n, M, d)$ code with $d \geq 2t + 1$. Then
$$M \leq \frac{q^n}{\sum_{m=0}^{t} \binom{n}{m}(q-1)^m}.$$
Proof. Around each codeword $c$ we place a Hamming sphere $B(c, t)$ of radius $t$. Since the minimum distance of the code is at least $2t + 1$, these spheres do not overlap. The total number of vectors in all of the Hamming spheres cannot be greater than $q^n$. Thus, we get
$$M \sum_{m=0}^{t} \binom{n}{m}(q-1)^m \leq q^n.$$
This yields the desired inequality for $M$.
An $(n, M, d)$ code with $d = 2t + 1$ that satisfies the Hamming bound with equality is called a perfect code. A perfect $t$-error correcting code is one such that the Hamming spheres of radius $t$ with centers at the codewords cover the entire space of $q$-ary $n$-tuples. The Hamming codes (Section 24.5) and the Golay code $G_{23}$ (Section 24.6) are perfect. Other examples of perfect codes are the trivial code obtained by taking all $n$-tuples, and the binary repetition codes of odd length (Exercise 15).
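The sphere packing condition can be tested directly from the parameters; the `is_perfect` helper below is our own illustration, not the book's notation.

```python
from math import comb

def is_perfect(q, n, M, d):
    # Perfect: the radius-t spheres around the M codewords, with t = (d-1)//2,
    # exactly fill the space of q^n vectors.
    t = (d - 1) // 2
    vol = sum(comb(n, m) * (q - 1) ** m for m in range(t + 1))
    return M * vol == q ** n

print(is_perfect(2, 7, 16, 3))     # Hamming (7, 16, 3): True
print(is_perfect(2, 23, 4096, 7))  # Golay (23, 2^12, 7): True
```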
Perfect codes have been studied extensively, and they are interesting from many viewpoints. The complete list of perfect codes is now known. It includes the preceding examples, plus a ternary $[11, 6, 5]$ code constructed by Golay. We leave the reader with a caveat. A name like perfect codes might lead one to assume that perfect codes are the best error correcting codes. This, however, is not true: there are error correcting codes, such as Reed-Solomon codes, that are not perfect codes yet have better error correcting capabilities for certain situations than perfect codes.
One of the problems central to the theory of error correcting codes is to find the largest code of a given length $n$ and given minimum distance $d$. This leads to the following definition.
Let the alphabet $A$ have $q$ elements. Given $n$ and $d$ with $d \leq n$, the largest $M$ such that an $(n, M, d)$ code exists is denoted $A_q(n, d)$.
We can always find at least one code: Fix an element $a$ of $A$. Let $C$ be the set of all vectors $(x, \ldots, x, a, \ldots, a)$ (with $d$ copies of $x$ and $n - d$ copies of $a$) with $x \in A$. There are $q$ such vectors, and they are at distance $d$ from each other, so we have an $(n, q, d)$ code. This gives the trivial lower bound $A_q(n, d) \geq q$. We'll obtain much better bounds later.
It is easy to see that $A_q(n, 1) = q^n$: When a code has minimum distance 1, we can take the code to be all $q$-ary $n$-tuples. At the other extreme, $A_q(n, n) = q$ (Exercise 7).
The following lower bound, known as the Gilbert-Varshamov bound, was discovered in the 1950s.
Given $n$ and $d$ with $d \leq n$, there exists a $q$-ary $(n, M, d)$ code with
$$M \geq \frac{q^n}{\sum_{m=0}^{d-1} \binom{n}{m}(q-1)^m}.$$
This means that
$$A_q(n, d) \geq \frac{q^n}{\sum_{m=0}^{d-1} \binom{n}{m}(q-1)^m}.$$
Proof. Start with a vector $c_1$ and remove all vectors in $A^n$ (where $A$ is an alphabet with $q$ symbols) that are in a Hamming sphere of radius $d - 1$ about $c_1$. Now choose another vector $c_2$ from those that remain. Since all vectors with distance at most $d - 1$ from $c_1$ have been removed, $d(c_1, c_2) \geq d$. Now remove all vectors that have distance at most $d - 1$ from $c_2$, and choose $c_3$ from those that remain. We cannot have $d(c_1, c_3) \leq d - 1$ or $d(c_2, c_3) \leq d - 1$, since all vectors satisfying these inequalities have been removed. Therefore, $d(c_i, c_3) \geq d$ for $i = 1, 2$. Continuing in this way, choose $c_4, c_5, \ldots$, until there are no more vectors.
The selection of a vector $c_i$ removes at most
$$\sum_{m=0}^{d-1} \binom{n}{m}(q-1)^m$$
vectors from the space. If we have chosen $M$ vectors $c_1, \ldots, c_M$, then we have removed at most
$$M \sum_{m=0}^{d-1} \binom{n}{m}(q-1)^m$$
vectors, by the preceding lemma. We can continue until all $q^n$ vectors are removed, which means we can continue at least until
$$M \sum_{m=0}^{d-1} \binom{n}{m}(q-1)^m \geq q^n.$$
Therefore, there exists a code whose number $M$ of codewords satisfies the preceding inequality.
Since $A_q(n, d)$ is the largest such $M$, it also satisfies the inequality.
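The right-hand side of the bound is easy to evaluate; the helper name below is our own illustration.

```python
from math import comb

def gv_lower_bound(q, n, d):
    # A_q(n, d) >= q^n / sum_{m=0}^{d-1} C(n, m) (q - 1)^m
    denom = sum(comb(n, m) * (q - 1) ** m for m in range(d))
    return q ** n / denom

print(gv_lower_bound(2, 7, 3))  # 128/29, about 4.4; the Hamming [7,4] code has 16
```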
There is one minor technicality that should be mentioned. We actually have constructed an $(n, M, d_1)$ code with $d_1 \geq d$. However, by modifying a few entries of $c_2$ if necessary, we can arrange that $d(c_1, c_2) = d$. The remaining vectors are then chosen by the above procedure. This produces a code where the minimum distance is exactly $d$.
If we want to send codewords with $n$ bits over a noisy channel, and there is a probability $p$ that any given bit will be corrupted, then we expect the number of errors to be approximately $pn$ when $n$ is large. Therefore, we need an $(n, M, d)$ code with $d > 2pn$. We therefore need to consider codes with $d \geq \delta n$, for some given $\delta > 0$. How does this affect $M$ and the code rate?
Here is what happens. Fix $q$ and choose $\delta$ with $0 < \delta < 1 - 1/q$. The asymptotic Gilbert-Varshamov bound says that there is a sequence of $q$-ary $(n, M, d)$ codes with $n \to \infty$ and $d/n \to \delta$ such that the code rate approaches a limit $\geq 1 - H_q(\delta)$, where
$$H_q(\delta) = \delta \log_q(q - 1) - \delta \log_q \delta - (1 - \delta)\log_q(1 - \delta).$$
The graph of $1 - H_q(\delta)$ is as in Figure 24.2. Of course, we would like to have codes with high error correction (that is, high $\delta$), and with high code rate ($R$ close to 1). The asymptotic result says that there are codes with error correction and code rate good enough to lie arbitrarily close to, or above, the graph.
The existence of certain sequences of codes having code rate limit strictly larger than $1 - H_q(\delta)$ (for certain $q$ and $\delta$) was proved in 1982 by Tsfasman, Vladut, and Zink using Goppa codes arising from algebraic geometry.
Consider the binary repetition code $C$ of length 3 with the two vectors $(0, 0, 0)$ and $(1, 1, 1)$. It is a $(3, 2, 3)$ code. The Singleton bound says that $M \leq 2^{3-3+1} = 2$, so $C$ is an MDS code. The Hamming bound (with $t = 1$) says that
$$M \leq \frac{2^3}{\binom{3}{0} + \binom{3}{1}} = \frac{8}{4} = 2,$$
so $C$ is also perfect. The Gilbert-Varshamov bound says that there exists a binary $(3, M, 3)$ code with
$$M \geq \frac{2^3}{\binom{3}{0} + \binom{3}{1} + \binom{3}{2}} = \frac{8}{11},$$
which means $A_2(3, 3) \geq 1$.
The Hamming $[7, 4]$ code has $M = 2^4 = 16$ codewords and minimum distance $d = 3$, so it is a $(7, 16, 3)$ code. The Singleton bound says that $M \leq 2^{7-3+1} = 32$, so it is not an MDS code. The Hamming bound (with $t = 1$) says that
$$M \leq \frac{2^7}{\binom{7}{0} + \binom{7}{1}} = \frac{128}{8} = 16,$$
so the code is perfect. The Gilbert-Varshamov bound says that there exists a $(7, M, 3)$ code with
$$M \geq \frac{2^7}{\binom{7}{0} + \binom{7}{1} + \binom{7}{2}} = \frac{128}{29} \approx 4.4,$$
so the Hamming code is much better than this lower bound. Codes that have efficient error correction algorithms and also exceed the Gilbert-Varshamov bound are currently relatively rare.
The Hadamard code from Section 24.1 is a binary (because there are two symbols) $(32, 64, 16)$ code. The Singleton bound says that $M \leq 2^{17}$, so it is not very sharp in this case. The Hamming bound (with $t = 7$) says that
$$M \leq \frac{2^{32}}{\sum_{m=0}^{7} \binom{32}{m}} \approx 951.3.$$
The Gilbert-Varshamov bound says there exists a binary $(32, M, 16)$ code with
$$M \geq \frac{2^{32}}{\sum_{m=0}^{15} \binom{32}{m}} \approx 2.3.$$
When you are having a conversation with a friend over a cellular phone, your voice is turned into digital data that has an error correcting code applied to it before it is sent. When your friend receives the data, the errors in transmission must be accounted for by decoding the error correcting code. Only after decoding are the data turned into sound that represents your voice.
The delay required to decode a packet of data is critical in such an application. If decoding took several seconds, the delay would become aggravating and make holding a conversation difficult.
The problem of efficiently decoding a code is therefore of critical importance. In order to decode quickly, it is helpful to have some structure in the code rather than taking the code to be a random subset of the set of all $n$-tuples. This is one of the primary reasons for studying linear codes. For the remainder of this chapter, we restrict our attention to linear codes.
Henceforth, the alphabet will be a finite field $F$. For an introduction to finite fields, see Section 3.11. For much of what we do, the reader can assume that $F$ is $\mathbf{Z}_2 =$ the integers mod 2, in which case we are working with binary vectors. Another concrete example of a finite field is $\mathbf{Z}_p =$ the integers mod a prime $p$. For other examples, see Section 3.11. In particular, as is pointed out there, $F$ must be one of the finite fields $GF(q)$ with $q$ a power of a prime; but the present notation is more compact. Since we are working with arbitrary finite fields, we'll use “=” instead of “≡” in our equations. If you want to think of $F$ as being $\mathbf{Z}_2$, just replace all equalities between elements of $F$ with congruences mod 2.
The set of $n$-dimensional vectors with entries in $F$ is denoted by $F^n$. They form a vector space over $F$. Recall that a subspace of $F^n$ is a nonempty subset $S$ that is closed under linear combinations, which means that if $s_1, s_2$ are in $S$ and $a, b$ are in $F$, then $as_1 + bs_2 \in S$. By taking $a = b = 0$, for example, we see that $(0, \ldots, 0) \in S$.
A linear code of dimension $k$ and length $n$ over a field $F$ is a $k$-dimensional subspace of $F^n$. Such a code is called an $[n, k]$ code. If the minimum distance of the code is $d$, then the code is called an $[n, k, d]$ code.
When $F = \mathbf{Z}_2$, the definition can be given more simply. A binary code of length $n$ and dimension $k$ is a set of $2^k$ binary $n$-tuples (the codewords) such that the sum of any two codewords is always a codeword.
Many of the codes we have met are linear codes. For example, the binary repetition code $\{(0,0,0), (1,1,1)\}$ is a one-dimensional subspace of $\mathbf{Z}_2^3$. The parity check code from Exercise 2 in Section 24.1 is a linear code of dimension 7 and length 8. It consists of those binary vectors of length 8 such that the sum of the entries is 0 mod 2. It is not hard to show that the set of such vectors forms a subspace. The vectors
$$(1,0,0,0,0,0,0,1),\ (0,1,0,0,0,0,0,1),\ \ldots,\ (0,0,0,0,0,0,1,1)$$
form a basis of this subspace. Since there are seven basis vectors, the subspace is seven-dimensional.
The Hamming [7, 4] code from Example 4 of Section 24.1 is a linear code of dimension 4 and length 7. Every codeword is a linear combination of the four rows of the matrix $G$ given there. Since these four rows span the code and are linearly independent, they form a basis.
The ISBN code (Example 5 of Section 24.1) is not linear. It consists of a set of 10-dimensional vectors with entries in $\mathbf{Z}_{11}$. However, it is not closed under linear combinations since X (= 10) is not allowed as one of the first nine entries.
Let $C$ be a linear code of dimension $k$ over a field $F$. If $F$ has $q$ elements, then $C$ has $q^k$ elements. This may be seen as follows. There is a basis of $C$ with $k$ elements; call them $v_1, \ldots, v_k$. Every element of $C$ can be written uniquely in the form $a_1 v_1 + \cdots + a_k v_k$, with $a_1, \ldots, a_k \in F$. There are $q$ choices for each $a_i$, and there are $k$ numbers $a_i$. This means there are $q^k$ elements of $C$, as claimed. Therefore, an $[n, k, d]$ linear code is an $(n, q^k, d)$ code in the notation of Section 24.2.
For an arbitrary, possibly nonlinear, code, computing the minimum distance could require computing $d(u, v)$ for every pair of codewords. For a linear code, the computation is much easier. Define the Hamming weight $wt(u)$ of a vector $u$ to be the number of nonzero places in $u$. It equals $d(u, \mathbf{0})$, where $\mathbf{0}$ denotes the vector $(0, \ldots, 0)$.
Let $C$ be a linear code. Then $d(C)$ equals the smallest Hamming weight of all nonzero code vectors: $d(C) = \min\{wt(u) \mid u \in C,\ u \neq \mathbf{0}\}$.
Proof. Since $wt(u) = d(u, \mathbf{0})$ is the distance between two codewords, we have $d(C) \leq wt(u)$ for all nonzero codewords $u$. It remains to show that there is a nonzero codeword with weight equal to $d(C)$. Note that $d(u, v) = wt(u - v)$ for any two vectors $u, v$. This is because an entry of $u - v$ is nonzero, and hence gets counted in $wt(u - v)$, if and only if $u$ and $v$ differ in that entry. Choose $u$ and $v$ to be distinct codewords such that $d(u, v) = d(C)$. Then $wt(u - v) = d(C)$, and $u - v$ is a nonzero codeword since $C$ is linear. So the minimum weight of the nonzero codewords equals $d(C)$.
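For small binary linear codes, the proposition can be checked by brute force; the following sketch (our own function, feasible only for small $k$) enumerates all $\mathbf{Z}_2$-linear combinations of a basis and takes the minimum nonzero weight.

```python
from itertools import product

def min_distance_linear(basis):
    # d(C) = minimum Hamming weight over all nonzero codewords of a binary
    # linear code spanned by the given basis rows.
    n = len(basis[0])
    best = n + 1
    for coeffs in product([0, 1], repeat=len(basis)):
        word = [0] * n
        for c, row in zip(coeffs, basis):
            if c:
                word = [(a + b) % 2 for a, b in zip(word, row)]
        w = sum(word)
        if 0 < w < best:
            best = w
    return best

basis = [[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]]
print(min_distance_linear(basis))  # codewords have weights 3, 3, 6 -> d = 3
```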
To construct a linear $[n, k]$ code, we have to construct a $k$-dimensional subspace of $F^n$. The easiest way to do this is to choose $k$ linearly independent vectors and take their span. This can be done by choosing a $k \times n$ generating matrix $G$ of rank $k$, with entries in $F$. The set of vectors of the form $uG$, where $u$ runs through all row vectors in $F^k$, then gives the subspace.
For our purposes, we'll usually take $G = [I_k, P]$, where $I_k$ is the $k \times k$ identity matrix and $P$ is a $k \times (n - k)$ matrix. The rows of $G$ are the basis for a $k$-dimensional subspace of the space of all vectors of length $n$. This subspace is our linear code $C$. In other words, every codeword is uniquely expressible as a linear combination of rows of $G$. If we use a matrix $G = [I_k, P]$ to construct a code, the first $k$ columns determine the codewords. The remaining $n - k$ columns provide the redundancy.
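Encoding is a single vector-matrix product mod 2. Here is a minimal sketch using a hypothetical $[6, 2]$ systematic generator matrix; the helper name is our own.

```python
def encode(u, G):
    # v = uG over F_2: the j-th output symbol is the mod-2 dot product of u
    # with the j-th column of G.
    return [sum(ui * gij for ui, gij in zip(u, col)) % 2 for col in zip(*G)]

# Hypothetical systematic generator G = [I_2 | P]
G = [[1, 0, 1, 0, 1, 0],
     [0, 1, 0, 1, 0, 1]]
print(encode([1, 1], G))  # first 2 entries carry u; the rest are redundancy
```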
The code in the second half of Example 1, Section 24.1, has
$$G = \begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}.$$
The codewords 101010 and 010101 appear as rows in the matrix, and the codeword 111111 is the sum of these two rows. This is a $[6, 2]$ code.
The code in Example 2 has $G = [I_7, P]$, where $I_7$ is the $7 \times 7$ identity matrix and $P$ is the column vector of seven 1s.
For example, the codeword 11001001 is the sum mod 2 of the first, second, and fifth rows, and hence is obtained by multiplying $(1, 1, 0, 0, 1, 0, 0)$ times $G$. This is an $[8, 7]$ code.
In Exercise 4, the matrix $G$ is given in the description of the code. As you can guess from its name, it is a $[7, 4]$ code.
As mentioned previously, we could start with any $k \times n$ matrix of rank $k$. Its rows would generate an $[n, k]$ code. However, row and column operations can be used to transform the matrix to the form of $G$ we are using, so we usually do not work with the more general situation. A code described by a matrix $G = [I_k, P]$ as before is said to be systematic. In this case, the first $k$ symbols of a codeword are the information symbols and the last $n - k$ symbols are the check symbols.
Suppose we have $G = [I_k, P]$ as the generating matrix for a code $C$. Let
$$H = [-P^T, I_{n-k}],$$
where $P^T$ is the transpose of $P$. In Exercise 4 of Section 24.1, this is the matrix that was used to correct errors. For Exercise 2, we have $H = [1, 1, 1, 1, 1, 1, 1, 1]$. Note that in this case a binary string $v$ is a codeword if and only if the number of nonzero bits is even, which is the same as saying that its dot product with $(1, 1, 1, 1, 1, 1, 1, 1)$ is zero. This can be rewritten as $vH^T = 0$, where $H^T$ is the transpose of $H$.
More generally, suppose we have a linear $[n, k]$ code $C$. An $(n - k) \times n$ matrix $H$ is called a parity check matrix for $C$ if $H$ has the property that a vector $v$ is in $C$ if and only if $vH^T = 0$. We have the following useful result.
If $G = [I_k, P]$ is the generating matrix for a code $C$, then $H = [-P^T, I_{n-k}]$ is a parity check matrix for $C$.
Proof. Consider the $i$th row of $G$, which has the form
$$r_i = (0, \ldots, 0, 1, 0, \ldots, 0, p_{i1}, \ldots, p_{i,n-k}),$$
where the 1 is in the $i$th position. This is a vector of the code $C$. The $j$th column of $H^T$ is the vector
$$(-p_{1j}, \ldots, -p_{kj}, 0, \ldots, 0, 1, 0, \ldots, 0)^T,$$
where the 1 is in the $(k + j)$th position. To obtain the $j$th entry of $r_i H^T$, take the dot product of these two vectors, which yields
$$1 \cdot (-p_{ij}) + p_{ij} \cdot 1 = 0.$$
Therefore, $H^T$ annihilates every row $r_i$ of $G$. Since every element of $C$ is a linear combination of rows of $G$, we find that $vH^T = 0$ for all $v \in C$.
Recall the following fact from linear algebra: The left null space of an $m \times n$ matrix of rank $r$ has dimension $m - r$. Since $H^T$ contains $I_{n-k}$ as a submatrix, it has rank $n - k$. Therefore, its left null space has dimension $n - (n - k) = k$. But we have just proved that $C$ is contained in this null space. Since $C$ also has dimension $k$, it must equal the null space, which is what the theorem claims.
We now have a way of detecting errors: If $r$ is received during a transmission and $rH^T \neq 0$, then there is an error. If $rH^T = 0$, we cannot conclude that there is no error, but we do know that $r$ is a codeword. Since it is more likely that no errors occurred than that enough errors occurred to change one codeword into another codeword, the best guess is that an error did not occur.
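Error detection is thus a single matrix multiplication mod 2. The sketch below uses a hypothetical parity check matrix $H = [P^T, I_4]$ paired with $G = [I_2, P]$ for a small binary code; all names are our own illustration.

```python
def syndrome(v, H):
    # s = v H^T over F_2: dot v with each row of H; s == 0 iff v is a codeword.
    return [sum(a * b for a, b in zip(v, row)) % 2 for row in H]

# Hypothetical H = [P^T | I_4] for G = [I_2 | P] with P = [[1,0,1,0],[0,1,0,1]]
H = [[1, 0, 1, 0, 0, 0],
     [0, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
print(syndrome([1, 0, 1, 0, 1, 0], H))  # codeword -> [0, 0, 0, 0]
print(syndrome([1, 1, 1, 0, 1, 0], H))  # one flipped bit -> nonzero syndrome
```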
We can also use a parity check matrix to make the task of decoding easier. First, let’s look at an example.
Let be the binary linear code with generator matrix
We are going to make a table of all binary vectors of length 4 according to the following procedure. First, list the four elements of the code in the first row, starting with . Then, among the 12 remaining vectors, choose one of smallest weight (there might be several choices). Add this vector to the first row to obtain the second row. From the remaining eight vectors, again choose one with smallest weight and add it to the first row to obtain the third row. Finally, choose a vector with smallest weight from the remaining four vectors, add it to the first row, and obtain the fourth row. We obtain the following:
This can be used as a decoding table. When we receive a vector, find it in the table. Decode by changing the vector to the one at the top of its column. The error that is removed is the first element of its row. For example, suppose we receive the last element of the second row. We decode it to the codeword at the top of its column, which means removing the error given by the vector at the start of its row. In this small example, this is not exactly the same as nearest neighbor decoding, since some received vectors have two equally close codewords. The problem is that the minimum distance of the code is 2, so general error correction is not possible. However, if we use a code that can correct up to $t$ errors, this procedure correctly decodes all vectors that are at distance at most $t$ from a codeword.
In a large example, finding the vector in the table can be tedious. In fact, writing the table can be rather difficult (that’s why we used such a small example). This is where a parity check matrix comes to the rescue.
The first vector in a row is called the coset leader. Denote it by $e$, and let $v$ be any vector in the same row as $e$. Then $v = c + e$ for some codeword $c$, since this is how the table was constructed. Therefore,
$$vH^T = cH^T + eH^T = eH^T,$$
since $cH^T = 0$ by the definition of a parity check matrix. The vector $vH^T$ is called the syndrome of $v$. What we have shown is that two vectors in the same row have the same syndrome. Replace the preceding table with the following much smaller table.
| Coset Leader | Syndrome |
This table may be used for decoding as follows. For a received vector $r$, calculate its syndrome $s = rH^T$. Find this syndrome on the list and subtract the corresponding coset leader from $r$. This gives the same decoding as above. For example, the received vector considered previously has the syndrome of the second row, so we subtract that row's coset leader from it to obtain the codeword.
We now consider the general situation. The method of the example leads us to two definitions.
Let $C$ be a linear code and let $u$ be an $n$-dimensional vector. The set given by
$$u + C = \{u + c \mid c \in C\}$$
is called a coset of $C$.
It is easy to see that if $v \in u + C$, then the sets $v + C$ and $u + C$ are the same (Exercise 9).
A vector having minimum Hamming weight in a coset is called a coset leader.
The syndrome of a vector $u$ is defined to be $S(u) = uH^T$. The following lemma allows us to determine the cosets easily.
Two vectors $u$ and $v$ belong to the same coset if and only if they have the same syndrome.
Proof. Two vectors $u$ and $v$ belong to the same coset if and only if their difference belongs to the code; that is, $u - v \in C$. This happens if and only if $0 = (u - v)H^T = uH^T - vH^T$, which is equivalent to $S(u) = S(v)$.
Decoding can be achieved by building a syndrome lookup table, which consists of the coset leaders and their corresponding syndromes. With a syndrome lookup table, we can decode with the following steps:
For a received vector $r$, calculate its syndrome $s = rH^T$.
Next, find the coset leader with the same syndrome as $r$. Call the coset leader $e$.
Decode $r$ as $r - e$.
Syndrome decoding requires significantly fewer steps than searching for the nearest codeword to a received vector. However, for large codes it is still too inefficient to be practical. The general problem of finding the nearest codeword in a linear code is hard; in fact, it is what is known as an NP-complete problem. However, for certain special types of codes, efficient decoding is possible. We treat some examples in the next few sections.
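The whole procedure can be sketched as follows; the small $[6, 2]$ code and the helper names are our own illustration, and the table-building step is feasible only for small $n$.

```python
from itertools import product

def syndrome(v, H):
    return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

def build_lookup(H, n):
    # Map each syndrome to a minimum-weight coset leader by scanning F_2^n.
    table = {}
    for v in product([0, 1], repeat=n):
        s = syndrome(v, H)
        if s not in table or sum(v) < sum(table[s]):
            table[s] = v
    return table

def syndrome_decode(r, H, table):
    e = table[syndrome(r, H)]                     # coset leader = presumed error
    return tuple((a + b) % 2 for a, b in zip(r, e))

# Hypothetical H = [P^T | I_4] for G = [I_2 | P], P = [[1,0,1,0],[0,1,0,1]]
H = [[1, 0, 1, 0, 0, 0],
     [0, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
table = build_lookup(H, 6)
print(syndrome_decode((0, 0, 1, 0, 1, 0), H, table))  # -> (1, 0, 1, 0, 1, 0)
```

Only the (syndrome, coset leader) pairs need to be stored, not all $2^n$ vectors, which is exactly the saving the smaller table above provides.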
The vector space $F^n$ has a dot product, defined in the usual way:
$$(a_1, \ldots, a_n) \cdot (b_1, \ldots, b_n) = a_1 b_1 + \cdots + a_n b_n.$$
For example, if $F = \mathbf{Z}_2$ and $v = (1, 1, 0, 1, 0, 1)$, then
$$v \cdot v = 1 + 1 + 0 + 1 + 0 + 1 = 0,$$
so we find the possibly surprising fact that the dot product of a nonzero vector with itself can sometimes be 0, in contrast to the situation with real numbers. Therefore, the dot product does not tell us the length of a vector. But it is still a useful concept.
If $C$ is a linear $[n, k]$ code, define the dual code
$$C^\perp = \{u \in F^n \mid u \cdot c = 0 \text{ for all } c \in C\}.$$
If $C$ is a linear $[n, k]$ code with generating matrix $G = [I_k, P]$, then $C^\perp$ is a linear $[n, n-k]$ code with generating matrix $H = [-P^T, I_{n-k}]$. Moreover, $G$ is a parity check matrix for $C^\perp$.
Proof. Since every element of $C$ is a linear combination of the rows of $G$, a vector $u$ is in $C^\perp$ if and only if $uG^T = 0$. This means that $C^\perp$ is the left null space of $G^T$. Also, we see that $G$ is a parity check matrix for $C^\perp$. Since $G$ has rank $k$, so does $G^T$. The left null space of $G^T$ therefore has dimension $n - k$, so $C^\perp$ has dimension $n - k$. Because $H$ is a parity check matrix for $C$, and the rows of $G$ are in $C$, we have $GH^T = 0$. Taking the transpose of this relation, and recalling that transpose reverses order ($(AB)^T = B^T A^T$), we find $HG^T = 0$. This means that the rows of $H$ are in the left null space of $G^T$; therefore, in $C^\perp$. Since $H$ has rank $n - k$, the span of its rows has dimension $n - k$, which is the same as the dimension of $C^\perp$. It follows that the rows of $H$ span $C^\perp$, so $H$ is a generating matrix for $C^\perp$.
A code $C$ is called self-dual if $C = C^\perp$. The Golay code $G_{24}$ of Section 24.6 is an important example of a self-dual code.
Let $C = \{(0, 0, 0), (1, 1, 1)\}$ be the binary repetition code of length 3. Since $u \cdot (1, 1, 1) = u_1 + u_2 + u_3$ for every $u$, a vector $u$ is in $C^\perp$ if and only if $u_1 + u_2 + u_3 = 0$. This means that $C^\perp$ is a parity check code: $(u_1, u_2, u_3) \in C^\perp$ if and only if $u_1 + u_2 + u_3 \equiv 0 \pmod 2$.
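For small codes, the dual can be computed by brute force; the following sketch (our own helper) keeps every vector orthogonal mod 2 to all codewords.

```python
from itertools import product

def dual_code(codewords, n):
    # C^perp: all vectors whose mod-2 dot product with every codeword is 0.
    return sorted(v for v in product([0, 1], repeat=n)
                  if all(sum(a * b for a, b in zip(v, c)) % 2 == 0
                         for c in codewords))

rep = [(0, 0, 0), (1, 1, 1)]  # binary repetition code of length 3
print(dual_code(rep, 3))      # the even-weight (parity check) vectors
```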
Let $C$ be the binary code with generating matrix
$$G = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{pmatrix}.$$
The proposition says that $C^\perp$ has generating matrix
$$H = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}.$$
This is $G$ with the rows switched, so the rows of $G$ and the rows of $H$ generate the same subspace. Therefore, $C = C^\perp$, which says that $C$ is self-dual.
The Hamming codes are an important class of single error correcting codes that can be easily encoded and decoded. They were originally used in controlling errors in long-distance telephone calls. Binary Hamming codes have the following parameters:
Code length: $n = 2^m - 1$
Dimension: $k = 2^m - m - 1$
Minimum distance: $d = 3$
The easiest way to describe a Hamming code is through its parity check matrix. For a binary Hamming code of length $n = 2^m - 1$, first construct an $m \times n$ matrix whose columns are all nonzero binary $m$-tuples. For example, for a binary $[7, 4]$ Hamming code we take $m = 3$, so $n = 7$ and $k = 4$, and start with
$$\begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}.$$
In order to obtain a parity check matrix for a code in systematic form, we move the appropriate columns to the end so that the matrix ends with the $m \times m$ identity matrix. The order of the other columns is irrelevant. The result is the parity check matrix for a Hamming code. In our example, we move the 4th, 2nd, and 1st columns to the end to obtain
$$H = \begin{pmatrix} 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix},$$
which is the matrix $H$ from Exercise 3.
We can easily calculate a generator matrix $G$ from the parity check matrix $H$. Since Hamming codes are single error correcting codes, the syndrome method for decoding can be simplified. In particular, the error vector $e$ is allowed to have weight at most 1, and therefore will be zero or will have all zeros except for a single 1 in the $j$th position. In the latter case, the syndrome $rH^T = eH^T$ equals the transpose of the $j$th column of $H$.
The Hamming decoding algorithm, which corrects up to one bit error, is as follows:
Compute the syndrome $s = rH^T$ for the received vector $r$. If $s = 0$, then there are no errors. Return the received vector and exit.
Otherwise, determine the position $j$ of the column of $H$ that equals the transpose of the syndrome.
Change the $j$th bit in the received word, and output the resulting codeword.
As long as there is at most one bit error in the received vector, the result will be the codeword that was sent.
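The algorithm above can be sketched as follows; the particular systematic $[7, 4]$ parity check matrix is one common arrangement, assumed here for illustration, and the function name is our own.

```python
def hamming_decode(r, H):
    # Single-error correction: a nonzero syndrome matches the column of H
    # at the error position; flip that bit.
    s = [sum(a * b for a, b in zip(r, row)) % 2 for row in H]
    if not any(s):
        return list(r)                      # no error detected
    for j in range(len(r)):
        if [row[j] for row in H] == s:
            out = list(r)
            out[j] ^= 1
            return out
    raise ValueError("syndrome matches no column: more than one error")

# Systematic [7, 4] parity check matrix (one common choice, assumed here)
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
print(hamming_decode([1, 0, 1, 0, 0, 1, 1], H))  # corrects the flipped 3rd bit
```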
The binary $[15, 11]$ Hamming code has parity check matrix
Assume the received vector is
The syndrome $s = rH^T$ is calculated. It is the transpose of the 11th column of $H$, so we change the 11th bit of $r$ to get the decoded word as
Since the first 11 bits give the information, the original message was
Therefore, we have detected and corrected the error.
Two of the most famous binary codes are the Golay codes $G_{23}$ and $G_{24}$. The [24, 12, 8] extended Golay code $G_{24}$ was used by the Voyager 1 and Voyager 2 spacecraft during 1979–1981 to provide error correction for the transmission back to Earth of color pictures of Jupiter and Saturn. The (nonextended) Golay code $G_{23}$, which is a [23, 12, 7] code, is closely related to $G_{24}$. We shall construct $G_{24}$ first, then modify it to obtain $G_{23}$. There are many other ways to construct the Golay codes. See [MacWilliams-Sloane].
The generating matrix for $G_{24}$ is a $12 \times 24$ matrix $G$, constructed as follows.
All entries of $G$ are integers mod 2. The first 12 columns of $G$ are the $12 \times 12$ identity matrix. The last 11 columns are obtained as follows. The squares mod 11 are 0, 1, 3, 4, 5, 9 (for example, $3^2 \equiv 9$ and $4^2 = 16 \equiv 5 \pmod{11}$). Take the vector $(1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0)$, with a 1 in positions 0, 1, 3, 4, 5, 9 (numbering the positions from 0 to 10). This gives the last 11 entries in the first row of $G$. The last 11 entries of the other rows, except the last, are obtained by cyclically permuting the entries in this vector. (Note: The entries of $G$ are integers mod 2, not mod 11. The squares mod 11 are used only to determine which positions receive a 1.) The 13th column and the 12th row are included because they can be; they increase $k$ and $d$ and help give the code some of its nice properties. The basic properties of $G_{24}$ are given in the following theorem.
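The recipe can be carried out programmatically. The following sketch is our own reconstruction of one consistent arrangement (the exact placement of 1s in the 13th column and 12th row is an assumption on our part); it then verifies the orthogonality and weight-divisibility properties proved in the theorem that follows.

```python
def golay_g24():
    # Hypothetical reconstruction of G = [I_12 | B] following the recipe above.
    Q = {0, 1, 3, 4, 5, 9}                       # squares mod 11
    qr = [1 if j in Q else 0 for j in range(11)]
    rows = []
    for i in range(11):
        shift = qr[-i:] + qr[:-i]                # cyclic permutation of qr
        ident = [1 if j == i else 0 for j in range(12)]
        rows.append(ident + [1] + shift)         # assumed 1 in the 13th column
    # Assumed 12th row: identity part, 0 in the 13th column, then eleven 1s
    rows.append([0] * 11 + [1] + [0] + [1] * 11)
    return rows

G = golay_g24()
print(all(sum(row) % 4 == 0 for row in G))       # row weights divisible by 4
print(all(sum(a * b for a, b in zip(G[i], G[j])) % 2 == 0
          for i in range(12) for j in range(12)))  # all rows pairwise orthogonal
```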
$G_{24}$ is a self-dual [24, 12, 8] binary code. The weights of all vectors in $G_{24}$ are multiples of 4.
Proof. The rows of $G$ have length 24. Since the identity matrix $I_{12}$ is contained in $G$, the 12 rows of $G$ are linearly independent. Therefore, $G_{24}$ has dimension 12, so it is a $[24, 12, d]$ code for some $d$. The main work will be to show that $d = 8$. Along the way, we'll show that $G_{24}$ is self-dual and that the weights of its codewords are multiples of 4.
Of course, it would be possible to have a computer list all $2^{12} = 4096$ elements of $G_{24}$ and their weights. We would then verify the claims of the theorem. However, we prefer to give a more theoretical proof.
Let $r_1$ be the first row of $G$ and let $r_i$ be any of the other first 11 rows. An easy check shows that $r_1$ and $r_i$ have exactly four 1s in common, and each has four 1s that are matched with 0s in the other vector. In the sum $r_1 + r_i$, the four common 1s cancel mod 2, and the remaining four 1s from each row give a total of eight 1s in the sum. Therefore, $r_1 + r_i$ has weight 8. Also, the dot product $r_1 \cdot r_i$ receives contributions only from the common 1s, so $r_1 \cdot r_i \equiv 4 \equiv 0 \pmod 2$.
Now let $r_i$ and $r_j$ be any two distinct rows of $G$, other than the last row. The first 12 entries and the last 11 entries of $r_j$ are cyclic permutations of the corresponding parts of $r_i$, and also of the corresponding parts of the first row. Since a permutation of the entries does not change the weights of vectors or the value of dot products, the preceding calculation applies to $r_i$ and $r_j$. Therefore,

(1) $wt(r_i + r_j) = 8$,
(2) $r_i \cdot r_j \equiv 0 \pmod 2$.
An easy check shows that (1) and (2) also hold if $r_i$ or $r_j$ is the last row of $G$, so we see that (1) and (2) hold for any two distinct rows of $G$. Also, each row of $G$ has an even number of 1s, so (2) holds even when $i = j$.
Now let $u$ and $v$ be arbitrary elements of $G_{24}$. Then $u$ and $v$ are linear combinations of rows of $G$, so $u \cdot v$ is a sum of numbers of the form $r_i \cdot r_j$ for various rows $r_i$ and $r_j$ of $G$. Each of these dot products is 0 mod 2, so $u \cdot v \equiv 0 \pmod 2$. This implies that $G_{24} \subseteq G_{24}^\perp$. Since $G_{24}$ is a 12-dimensional subspace of 24-dimensional space, $G_{24}^\perp$ has dimension $24 - 12 = 12$. Therefore, $G_{24}$ and $G_{24}^\perp$ have the same dimension, and one is contained in the other. Therefore, $G_{24} = G_{24}^\perp$, which says that $G_{24}$ is self-dual.
Observe that the weight of each row of $G$ is a multiple of 4. The following lemma will be used to show that every element of $G_{24}$ has a weight that is a multiple of 4.
Let $v$ and $w$ be binary vectors of the same length. Then
$$wt(v + w) = wt(v) + wt(w) - 2\,(v \cdot w)_{\mathbf{Z}},$$
where the notation $(v \cdot w)_{\mathbf{Z}}$ means that the dot product is regarded as a usual integer, not mod 2 (for example, $((1,1,1) \cdot (1,1,1))_{\mathbf{Z}} = 3$, rather than 1).
Proof. The nonzero entries of $v + w$ occur when exactly one of the vectors has an entry 1 and the other has a 0 as its corresponding entry. When both vectors have a 1, these entries add to 0 mod 2 in the sum. Note that $wt(v) + wt(w)$ counts the total number of 1s in $v$ and $w$ and therefore includes these 1s that canceled each other. The contributions to $(v \cdot w)_{\mathbf{Z}}$ are caused exactly by these 1s that are common to the two vectors. So there are $(v \cdot w)_{\mathbf{Z}}$ entries in $v$ and the same number in $w$ that are included in $wt(v) + wt(w)$ but do not contribute to $wt(v + w)$. Putting everything together yields the equation in the lemma.
We now return to the proof of the theorem. Consider a vector $v$ in $G_{24}$. It can be written as a sum $v = r_{i_1} + \cdots + r_{i_k}$ of $k$ distinct rows of $G$. We'll prove that $4 \mid wt(v)$ by induction on $k$. Looking at $G$, we see that the weights of all rows of $G$ are multiples of 4, so the case $k = 1$ is true. Suppose, by induction, that all vectors that can be expressed as a sum of $k - 1$ rows of $G$ have weight divisible by 4. In particular, $w = r_{i_1} + \cdots + r_{i_{k-1}}$ has weight a multiple of 4. By the lemma,
$$wt(v) = wt(w + r_{i_k}) = wt(w) + wt(r_{i_k}) - 2\,(w \cdot r_{i_k})_{\mathbf{Z}}.$$
But $w \cdot r_{i_k} \equiv 0 \pmod 2$, as we proved. Therefore, $2\,(w \cdot r_{i_k})_{\mathbf{Z}} \equiv 0 \pmod 4$. We have proved that $4 \mid wt(v)$ whenever $v$ is a sum of $k$ rows. By induction, all sums of rows of $G$ have weight divisible by 4. This proves that all weights of vectors in $G_{24}$ are multiples of 4.
Finally, we prove that the minimum weight in $G_{24}$ is 8. This is true for the rows of $G$, but we also must show it for sums of rows of $G$. Since the weights of codewords are multiples of 4, we must show that there is no codeword of weight 4, since the weights must then be at least 8. In fact, 8 is then the minimum, because the first row of $G$, for example, has weight 8.
We need the following lemma.
The rows of the $12 \times 12$ matrix $B$ formed from the last 12 columns of $G$ are linearly independent mod 2. The rows of the $11 \times 11$ matrix $A$ formed from the last 11 entries of the first 11 rows of $G$ are linearly dependent mod 2. The only linear dependence relation is that the sum of all 11 rows of $A$ is 0 mod 2.
Proof. Since $G_{24}$ is self-dual, the dot product of any two rows of $G$ is 0. This means that the matrix product $GG^T = 0$. Since $G = [I_{12}, B]$ (that is, $I_{12}$ followed by the matrix $B$), this may be rewritten as
$$GG^T = I_{12} + BB^T = 0,$$
which implies that $BB^T = I_{12}$ (we're working mod 2, so the minus signs disappear). This means that $B$ is invertible, so the rows of $B$ are linearly independent.
The sum of the rows of $A$ is 0 mod 2, so this is a dependence relation. Let $w$ be the 11-dimensional column vector all of whose entries are 1. Then $Aw = 0$, since each row of $A$ contains six 1s. Suppose $x$ is a nonzero 11-dimensional column vector such that $Ax = 0$. Extend $w$ and $x$ to 12-dimensional vectors $w'$ and $x'$ by adjoining a 0 at the top of each column vector. Let $b$ be the bottom row of $B$. Then
$$Bx' = \begin{pmatrix} Ax \\ b \cdot x' \end{pmatrix} = \begin{pmatrix} 0 \\ b \cdot x' \end{pmatrix}.$$
This equation follows from the fact that . Note that multiplying a matrix times a vector consists of taking the dot products of the rows of the matrix with the vector.
Since is invertible and , we have , so Since we are working mod 2, the dot product must equal 1. Therefore,
Since is invertible, we must have , so (we are working mod 2). Ignoring the top entries in and , we obtain . Therefore, the only nonzero vector in the null space of is . Since the vectors in the null space of a matrix give the linear dependencies among the rows of the matrix, we conclude that the only dependency among the rows of is that the sum of the rows is 0. This proves the lemma.
Suppose is a codeword in . If is, for example, the sum of the second, third, and seventh rows, then will have 1s in the second, third, and seventh positions, because the first 12 columns of form an identity matrix. In this way, we see that if is the sum of rows of , then . Suppose now that . Then is the sum of at most four rows of . Clearly, cannot be a single row of , since each row has weight at least 8. If is the sum of two rows, we proved that is 8. If is the sum of three rows of , then there are two possibilities.
(1) First, suppose that the last row of is not one of the rows in the sum. Then three 1s are used from the 13th column, so a 1 appears in the 13th position of . The 1s from the first 12 positions (one for each of the rows ) contribute three more 1s to . Since , we have accounted for all four 1s in . Therefore, the last 11 entries of are 0. By the preceding lemma, a sum of only three rows of the matrix cannot be 0. Therefore, this case is impossible.
(2) Second, suppose that the last row of appears in the sum for , say with the last row of . Then the last 11 entries of are formed from the sum of two rows of (from and ) plus the vector from . Recall that the weight of the sum of two distinct rows of is 8. There is a contribution of 2 to this weight from the first 13 columns. Therefore, looking at the last 11 columns, we see that the sum of two distinct rows of has weight 6. Adding a vector mod 2 to the vector changes all the 1s to 0s and all the 0s to 1s. Therefore, the weight of the last 11 entries of is 5. Since , this is impossible, so this case also cannot occur.
Finally, if is the sum of four rows of , then the first 12 entries of have four 1s. Therefore, the last 12 entries of are all 0. By the lemma, a sum of four rows of cannot be 0, so we have a contradiction. This completes the proof that there is no codeword of weight 4.
Since the weights are multiples of 4, the smallest possibility for the weight is 8. As we pointed out previously, there are codewords of weight 8, so we have proved that the minimum weight of is 8. Therefore, is a [24, 12, 8] code, as claimed. This completes the proof of the theorem.
The (nonextended) Golay code is obtained by deleting the last entry of each codeword in .
is a linear [23,12,7] code.
Proof. Clearly each codeword has length 23. Also, the set of vectors in is easily seen to be closed under addition (if are vectors of length 24, then the first 23 entries of are computed from the first 23 entries of and ) and forms a binary vector space. The generating matrix for is obtained by removing the last column of the matrix for . Since contains the identity matrix, the rows of are linearly independent, and hence span a 12-dimensional vector space. If is a codeword in , then can be obtained by removing the last entry of some element of . If , then , so . Since has one entry fewer than , we have . This completes the proof.
Suppose a message is encoded using and the received message contains at most three errors. In the following, we show a way to correct these errors.
Let be the generating matrix for . Write in the form
where is the identity matrix, consists of the last 12 columns of , and are column vectors. Note that are the standard basis elements for 12-dimensional space. Write
where are column vectors. This means that are the rows of .
Suppose the received message is , where is a codeword from and
is the error vector. We assume .
The algorithm is as follows. The justification is given below.
Let be the syndrome.
Compute the row vectors , , and .
If , then the nonzero entries of correspond to the nonzero entries of .
If , then there is a nonzero entry in the th position of exactly when the th entry of is nonzero.
If for some with , then and the nonzero entries of are in the positions of the other nonzero entries of the error vector .
If for some with , then . If there is a nonzero entry for this in position (there are at most two such ), then .
The sender starts with the message
The codeword is computed as
and sent to us. Suppose we receive the message as
A calculation shows that
and
Neither of these has weight at most 3, so we compute and . We find that
This means that there is an error in position 4 (corresponding to the choice ) and in positions 20 (= 12 + 8) and 22 (= 12 + 10) (corresponding to the nonzero entries in positions 8 and 10 of ). We therefore compute
Moreover, since is in systematic form, we recover the original message from the first 12 entries:
We now justify the algorithm and show that if , then at least one of the preceding cases occurs.
Since is self-dual, the dot product of a row of with any codeword is 0. This means that . In our case, we have , so
This last equality just expresses the fact that the vector times the matrix equals times the first row of , plus times the second row of , etc. Also,
since (proved in the preceding lemma). We have
Therefore,
If , then either or , since otherwise there would be too many nonzero entries in . We therefore consider the following four cases.
. Then
Therefore, and we can determine the errors as in step (4) of the algorithm.
. Then for exactly one with , so
Therefore,
The vector has at most two nonzero entries, so we are in step (6) of the algorithm.
The choice of is uniquely determined by . Suppose for some . Then
(see Exercise 6). However, we showed in the proof of the theorem about that the weight of the sum of any two distinct rows of has weight 8, from which it follows that the sum of any two distinct rows of has weight 6. Therefore, . This contradiction shows that cannot exist, so is unique.
. In this case,
We have , so we are in step (3) of the algorithm.
. In this case, for some with . Therefore,
and we obtain
There are at most two nonzero entries in , so we are in step (5) of the algorithm.
As in (2), the choice of is uniquely determined by .
In each of these cases, we obtain a vector, let’s call it , with at most three nonzero entries. To correct the errors, we add (or subtract; we are working mod 2) to the received vector to get . How do we know this is the vector that was sent? By the choice of , we have
so
Since is self-dual, is a parity check matrix for . Since , we conclude that is a codeword. We obtained by correcting at most three errors in . Since we assumed there were at most three errors, and since the minimum weight of is 8, this must be the correct decoding. So the algorithm actually corrects the errors, as claimed.
The preceding algorithm requires several steps. We need to compute the weights of 26 vectors. Why not just look at the various possibilities for three errors and see which correction yields a codeword? There are C(24, 0) + C(24, 1) + C(24, 2) + C(24, 3) = 2325 possibilities for the locations of at most three errors, so this could be done on a computer. However, the preceding decoding algorithm is faster.
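The count of error patterns is a one-line computation:

```python
from math import comb

# 1 + 24 + 276 + 2024 = 2325 ways to place at most three errors in 24 positions
patterns = sum(comb(24, i) for i in range(4))
assert patterns == 2325
```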
Cyclic codes are a very important class of codes. In the next two sections, we’ll meet two of the most useful examples of these codes. In this section, we describe the general framework.
A code C of length n is called cyclic if, whenever (c0, c1, …, c_{n−1}) is a codeword, so is its cyclic shift (c_{n−1}, c0, …, c_{n−2}).
For example, if (c0, c1, c2, c3) is in a cyclic code, then so is (c3, c0, c1, c2). Applying the definition two more times, we see that (c2, c3, c0, c1) and (c1, c2, c3, c0) are also codewords, so all cyclic permutations of the codeword are codewords. This might seem to be a strange condition for a code to satisfy. After all, it would seem to be rather irrelevant that, for a given codeword, all of its cyclic shifts are still codewords. The point is that cyclic codes have a lot of structure, which makes them easier to study. In the case of BCH codes (see Section 24.8), this structure yields an efficient decoding algorithm.
Let’s start with an example. Consider the binary matrix
The rows of generate a three-dimensional subspace of seven-dimensional binary space. In fact, in this case, the cyclic shifts of the first row give all the nonzero codewords:
Clearly the minimum weight is 4, so we have a cyclic [7, 3, 4] code.
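These claims are easy to verify by machine. The sketch below assumes the generator row (1, 0, 1, 1, 1, 0, 0); if the book's matrix differs, substitute its first row:

```python
from itertools import product

def cyclic_shift(c):
    # (c0, ..., c6) -> (c6, c0, ..., c5)
    return (c[-1],) + c[:-1]

g = (1, 0, 1, 1, 1, 0, 0)                  # assumed first row of G
rows = [g, cyclic_shift(g), cyclic_shift(cyclic_shift(g))]

# All mod-2 linear combinations of the three rows
code = set()
for coeffs in product([0, 1], repeat=3):
    word = tuple(sum(a * r[i] for a, r in zip(coeffs, rows)) % 2 for i in range(7))
    code.add(word)

assert len(code) == 8                                   # a three-dimensional code
assert all(cyclic_shift(c) in code for c in code)       # closed under cyclic shifts
assert min(sum(c) for c in code if any(c)) == 4         # minimum weight is 4
```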
We now show an algebraic way to obtain this code. Let Z2[x] denote the polynomials in x with coefficients mod 2, and let Z2[x] (mod x^7 − 1) denote these polynomials mod x^7 − 1. For a detailed description of what this means, see Section 3.11. For the present, it suffices to say that working mod x^7 − 1 means we are working with polynomials of degree less than 7. Whenever we have a polynomial of degree 7 or higher, we divide by x^7 − 1 and take the remainder.
Let g(x) = 1 + x^2 + x^3 + x^4. Consider all products g(x)f(x)
with f(x) of degree ≤ 2. Write the coefficients of the product g(x)f(x) = a0 + a1x + ⋯ + a6x^6 as a vector (a0, a1, …, a6). For example, f(x) = 1 yields g(x) itself, whose vector (1, 0, 1, 1, 1, 0, 0) is the top row of G. Similarly, f(x) = x yields the second row of G and f(x) = x^2 yields the third row of G. Also, f(x) = 1 + x^2 yields g(x) + x^2g(x), which is the sum of the first and third rows of G. In this way, we obtain all the codewords of our code.
We obtained this code by considering products g(x)f(x) with deg f ≤ 2. We could also work with f of arbitrary degree and obtain the same code, as long as we work mod x^7 − 1. Note that x^7 − 1 = g(x)h(x), where h(x) = x^3 + x^2 + 1. Divide h(x) into f(x):
f(x) = q(x)h(x) + f1(x), with deg f1 ≤ 2. Then
g(x)f(x) = g(x)q(x)h(x) + g(x)f1(x) = q(x)(x^7 − 1) + g(x)f1(x) ≡ g(x)f1(x) (mod x^7 − 1). Therefore, f(x) gives the same codeword as f1(x), so we may restrict to working with polynomials of degree at most two, as claimed.
Why is the code cyclic? Start with the vector (1, 0, 1, 1, 1, 0, 0) for g(x). The vectors for xg(x) and x^2g(x) are cyclic shifts of the one for g(x) by one place and by two places, respectively. What happens if we multiply by x^3? We obtain a polynomial of degree 7, so we divide by x^7 − 1 and take the remainder:
x^3g(x) = x^3 + x^5 + x^6 + x^7 ≡ 1 + x^3 + x^5 + x^6 (mod x^7 − 1). The remainder yields the vector (1, 0, 0, 1, 0, 1, 1). This is the cyclic shift by three places of the vector for g(x).
A similar calculation for x^4g(x), x^5g(x), x^6g(x) shows that the vector for x^kg(x) yields the shift by k places of the vector for g(x). In fact, this is a general phenomenon. If f(x) = a0 + a1x + ⋯ + a6x^6 is a polynomial, then
xf(x) = a0x + a1x^2 + ⋯ + a6x^7 ≡ a6 + a0x + a1x^2 + ⋯ + a5x^6 (mod x^7 − 1). The remainder is a6 + a0x + ⋯ + a5x^6, which corresponds to the vector (a6, a0, a1, …, a5). Therefore, multiplying by x and reducing mod x^7 − 1 corresponds to a cyclic shift by one place of the corresponding vector. Repeating this k times shows that multiplying by x^k corresponds to shifting by k places.
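The shift property can be checked with a small polynomial-multiplication routine. The generator coefficients below are an assumption consistent with this section's example:

```python
def poly_mult_mod(a, b, n):
    # Multiply binary polynomials (coefficient lists, lowest degree first)
    # and reduce mod x^n - 1: the exponent i + j folds to (i + j) mod n.
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] ^= ai & bj
    return out

g = [1, 0, 1, 1, 1, 0, 0]     # assumed generator coefficients for this example

# Multiplying by x shifts the coefficient vector cyclically by one place
assert poly_mult_mod(g, [0, 1], 7) == [0, 1, 0, 1, 1, 1, 0]
# Multiplying by x^4 shifts by four places
assert poly_mult_mod(g, [0, 0, 0, 0, 1], 7) == [1, 1, 0, 0, 1, 0, 1]
```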
We now describe the general situation. Let F be a finite field. For a treatment of finite fields, see Section 3.11. For the present purposes, you may think of F as being the integers mod p, where p is a prime number, since this is an example of a finite field. For example, you could take F = Z2, the integers mod 2. Let F[x] denote polynomials in x with coefficients in F. Choose a positive integer n. We'll work in F[x] (mod x^n − 1), which denotes the elements of F[x] mod x^n − 1. This means we're working with polynomials of degree less than n. Whenever we encounter a polynomial of degree at least n, we divide by x^n − 1 and take the remainder. Let g(x) be a polynomial in F[x]. Consider the set of polynomials
g(x)f(x) (mod x^n − 1), where f(x) runs through all polynomials in F[x] (we only need to consider f of degree less than n, since higher-degree polynomials can be reduced mod x^n − 1). Write
g(x)f(x) ≡ c0 + c1x + ⋯ + c_{n−1}x^{n−1} (mod x^n − 1). The coefficients give us the n-dimensional vector (c0, c1, …, c_{n−1}). The set C of all such coefficient vectors forms a subspace of n-dimensional space F^n. Then C is a code.
If is any such polynomial, and is another polynomial, then is the multiple of by the polynomial . Therefore, it yields an element of the code . In particular, multiplication by and reducing mod corresponds to a codeword that is a cyclic shift of the original codeword, as above. Therefore, is cyclic.
The following theorem gives the general description of cyclic codes.
Let be a cyclic code of length over a finite field . To each codeword , associate the polynomial in . Among all the nonzero polynomials obtained from in this way, let have the smallest degree. By dividing by its highest coefficient, we may assume that the highest nonzero coefficient of is 1. The polynomial is called the generating polynomial for . Then
is uniquely determined by .
is a divisor of .
is exactly the set of coefficients of the polynomials of the form with .
Write . Then corresponds to an element of if and only if .
Proof.
If is another such polynomial, then and have the same degree and have highest nonzero coefficient equal to 1. Therefore, has lower degree and still corresponds to a codeword, since is closed under subtraction. Since had the smallest degree among nonzero polynomials corresponding to codewords, must be 0, which means that . Therefore, is unique.
Divide into :
for some polynomials and , with . This means that
As explained previously, multiplying by powers of corresponds to cyclic shifts of the codeword associated to . Since is assumed to be cyclic, the polynomials for therefore correspond to codewords; call them . Write . Then corresponds to the linear combination
Since each is in and each is in , we have a linear combination of elements of . But is a vector subspace of -dimensional space . Therefore, this linear combination is in . This means that , which is , corresponds to a codeword. But , which is the minimal degree of a polynomial corresponding to a nonzero codeword in . Therefore, . Consequently , so is a divisor of .
Let correspond to an element of . Divide into :
with . As before, corresponds to a codeword. Also, corresponds to a codeword, by assumption. Therefore, corresponds to the difference of these codewords, which is a codeword. But this polynomial is just . As before, this polynomial has degree less than , so . Therefore, . Since , we must have . Conversely, as explained in the proof of (2), since is cyclic, any such polynomial of the form yields a codeword. Therefore, these polynomials yield exactly the elements of .
Write , which can be done by (2). Suppose corresponds to an element of . Then , by (3), so
Conversely, suppose is a polynomial such that . Write , for some polynomial . Dividing by yields , which is a multiple of , and hence corresponds to a codeword. This completes the proof of the theorem.
Let be as in the theorem. By part (3) of the theorem, every element of corresponds to a polynomial of the form , with . This means that each such is a linear combination of the monomials . It follows that the codewords of are linear combinations of the codewords corresponding to the polynomials
But these are the vectors
Therefore, a generating matrix for can be given by
We can use part (4) of the theorem to obtain a parity check matrix for . Let be as in the theorem (where ). We’ll prove that the matrix
is a parity check matrix for . Note that the order of the coefficients of is reversed. Recall that is a parity check matrix for means that if and only if .
is a parity check matrix for .
Proof. First observe that since has 1 as its highest nonzero coefficient, and since , the highest nonzero coefficient of must also be 1. Therefore, is in row echelon form and consequently its rows are linearly independent. Since has rows, it has rank . The right null space of therefore has dimension .
Let . We know from part (4) that if and only if .
Choose with and look at the coefficient of in the product . It equals
There is a technical point to mention: Since we are looking at , we need to worry about a contribution from the term (since , the monomial reduces to ). However, the highest-degree term in the product before reducing mod is . Since , we have . Therefore, there is no term with to worry about.
When we multiply times , we obtain a vector whose first entry is
More generally, the th entry (where ) is
This is the coefficient of in the product .
If is in , then , so all these coefficients are 0. Therefore, times is the 0 vector, so the transposes of the vectors of are contained in the right null space of . Since both and the null space have dimension , we must have equality. This proves that if and only if , which means that is a parity check matrix for .
In the example at the beginning of this section, we had and . We have , so . The parity check matrix is
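As a computational check, the following sketch builds H from the reversed coefficients of h(x) = 1 + x^2 + x^3 (the cofactor assumed for this example, with g(x) = 1 + x^2 + x^3 + x^4) and verifies that every codeword has zero syndrome:

```python
from itertools import product

g = [1, 0, 1, 1, 1, 0, 0]      # assumed generator row
h_rev = [1, 1, 0, 1]           # coefficients of h(x) = 1 + x^2 + x^3, reversed

# Rows of H are the reversed h slid across the 7 columns
H = [[0] * i + h_rev + [0] * (7 - 4 - i) for i in range(4)]

def shift(c):
    return [c[-1]] + c[:-1]

rows = [g, shift(g), shift(shift(g))]
for coeffs in product([0, 1], repeat=3):
    c = [sum(a * r[i] for a, r in zip(coeffs, rows)) % 2 for i in range(7)]
    syndrome = [sum(hi * ci for hi, ci in zip(hrow, c)) % 2 for hrow in H]
    assert syndrome == [0, 0, 0, 0]      # Hc^T = 0 for every codeword
```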
The parity check matrix gives a way of detecting errors, but correcting errors for general cyclic codes is generally quite difficult. In the next section, we describe a class of cyclic codes for which a good decoding algorithm exists.
BCH codes are a class of cyclic codes. They were discovered around 1959 by R. C. Bose and D. K. Ray-Chaudhuri and independently by A. Hocquenghem. One reason they are important is that there exist good decoding algorithms that correct multiple errors (see, for example, [Gallager] or [Wicker]). BCH codes are used in satellites. The special BCH codes called Reed-Solomon codes (see Section 24.9) have numerous applications.
Before describing BCH codes, we need a fact about finite fields. Let F be a finite field with q elements. From Section 3.11, we know that q is a power of a prime number p. Let n be a positive integer not divisible by p. Then it can be proved that there exists a finite field F′ containing F such that F′ contains a primitive nth root of unity α. This means that α^n = 1, but α^k ≠ 1 for 1 ≤ k ≤ n − 1.
For example, if F = Z2, the integers mod 2, and n = 3, we may take F′ = GF(4). The element ω in the description of GF(4) in Section 3.11 is a primitive third root of unity. More generally, a primitive nth root of unity exists in a finite field with q^m elements if and only if n divides q^m − 1.
The reason we need the auxiliary field F′ is that several of the calculations we perform need to be carried out in this larger field. In the following, when we use an nth root of unity α, we'll implicitly assume that we're calculating in some field that contains α. The results of the calculations, however, will give results about codes over the smaller field F.
The following result, often called the BCH bound, gives an estimate for the minimum weight of a cyclic code.
Let be a cyclic code over a finite field , where has elements. Assume Let be the generating polynomial for . Let be a primitive th root of unity and suppose that for some integers and ,
Then .
Proof. Suppose has weight with . We want to obtain a contradiction. Let . We know that is a multiple of , so
Let be the nonzero coefficients of , so
The fact that for (note that ) can be rewritten as
We claim that the determinant of the matrix is nonzero. We need the following evaluation of the Vandermonde determinant. The proof can be found in most books on linear algebra.
(The product is over all pairs of integers with .) In particular, if are pairwise distinct, the determinant is nonzero.
In our matrix, we can factor from the first column, from the second column, etc., to obtain
Since are pairwise distinct, the determinant is nonzero. Why are these numbers distinct? Suppose . We may assume . We have . Therefore, . Note that . Since is a primitive th root of unity, for . Therefore, , so . This means that the numbers are pairwise distinct, as claimed.
Since the determinant is nonzero, the matrix is nonsingular. This implies that , contradicting the fact that these were the nonzero ’s. Therefore, all nonzero codewords have weight at least . This completes the proof of the theorem.
Let F = Z2 = the integers mod 2, and let n = 3. Let g(x) = 1 + x + x^2. Then
C = {(0, 0, 0), (1, 1, 1)}, which is a binary repetition code. Let ω be a primitive third root of unity, as in the description of GF(4) in Section 3.11. Then g(ω) = g(ω^2) = 0. In the theorem, we can therefore take b = 1 and d = 3. We find that the minimal weight of C is at least 3. In this case, the bound is sharp, since the minimal weight of C is exactly 3.
Let F be any finite field and let n be any positive integer. Let g(x) = x − 1. Then g(1) = 0, so we may take b = 0 and d = 2. We conclude that the minimum weight of the code generated by x − 1 is at least 2 (actually, the theorem assumes gcd(n, q) = 1, but this assumption is not needed for this special case where g(x) = x − 1). We have seen this code before. If (c0, c1, …, c_{n−1}) is a vector, and m(x) = c0 + c1x + ⋯ + c_{n−1}x^{n−1} is the associated polynomial, then m(x) is a multiple of x − 1 exactly when m(1) = 0. This means that c0 + c1 + ⋯ + c_{n−1} = 0. So a vector is a codeword if and only if the sum of its entries is 0. When F = Z2, this is the parity check code, and for other finite fields it is a generalization of the parity check code. The fact that its minimal weight is 2 is easy to see directly: If a codeword has a nonzero entry, then it must contain another nonzero entry to cancel it and make the sum of the entries be 0. Therefore, each nonzero codeword has at least two nonzero entries, and hence has weight at least 2. The vector (1, −1, 0, …, 0) is a codeword and has weight 2, so the minimal weight is exactly 2.
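A minimal encoder for this generalized parity check code:

```python
def encode_parity(message, q):
    # Append a check symbol so that the entries sum to 0 mod q
    return message + [(-sum(message)) % q]

assert encode_parity([1, 0, 1, 1], 2) == [1, 0, 1, 1, 1]   # binary parity bit
assert sum(encode_parity([3, 1, 4], 5)) % 5 == 0           # works mod 5 as well
```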
Let’s return to the example of a binary cyclic code of length 7 from Section 24.7. We have , and . We can factor . Let be a root of . Then is a primitive seventh root of unity (see Exercise 18), and we are working in . Since , we have and . Therefore, . Squaring yields . Therefore, . This means that , so
In the theorem, we can take and . Therefore, the minimal weight in the code is at least 4 (in fact, it is exactly 4).
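These computations in GF(8) can be automated. The sketch below represents field elements as 3-bit integers and assumes g(x) = 1 + x^2 + x^3 + x^4 for the Section 24.7 generator:

```python
def gf8_mul(a, b):
    # Multiply in GF(8) = Z2[x]/(x^3 + x + 1); elements are 3-bit integers
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (5, 4, 3):          # reduce using x^3 = x + 1
        if (r >> i) & 1:
            r ^= (1 << i) | (1 << (i - 2)) | (1 << (i - 3))
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

def eval_poly(coeffs, x):
    # Horner evaluation; coeffs are listed lowest degree first
    r = 0
    for c in reversed(coeffs):
        r = gf8_mul(r, x) ^ c
    return r

alpha = 0b010                    # the class of x, a root of x^3 + x + 1

# alpha is a primitive 7th root of unity
assert gf8_pow(alpha, 7) == 1
assert all(gf8_pow(alpha, k) != 1 for k in range(1, 7))

m1 = [1, 1, 0, 1]                # x^3 + x + 1
g = [1, 0, 1, 1, 1]              # assumed generator 1 + x^2 + x^3 + x^4
assert all(eval_poly(m1, gf8_pow(alpha, k)) == 0 for k in (1, 2, 4))
assert all(eval_poly(g, gf8_pow(alpha, k)) == 0 for k in (0, 1, 2))
```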
To define the BCH codes, we need some more notation. We are going to construct codes of length n over a finite field F. Factor x^n − 1 into irreducible factors over F:
x^n − 1 = g1(x)g2(x)⋯gs(x), where each gi(x) is a polynomial with coefficients in F, and each gi(x) cannot be factored into lower-degree polynomials with coefficients in F. We may assume that the highest nonzero coefficient of each gi(x) is 1. Let α be a primitive nth root of unity. Then 1, α, α^2, …, α^{n−1} are roots of x^n − 1. This means that
x^n − 1 = (x − 1)(x − α)(x − α^2)⋯(x − α^{n−1}). Therefore, each gi(x) is a product of some of these factors x − α^j, and each α^j is a root of exactly one of the polynomials gi(x). For each j, let m_j(x) be the polynomial gi(x) such that gi(α^j) = 0. This gives us polynomials m_0(x), m_1(x), …, m_{n−1}(x). Of course, usually these polynomials are not all distinct, since a polynomial that has two different powers of α as roots will serve as both m_i(x) and m_j(x) (see the examples given later in this section).
A BCH code of designed distance d is a code with generating polynomial
g(x) = lcm(m_b(x), m_{b+1}(x), …, m_{b+d−2}(x)) for some integer b.
A BCH code of designed distance d has minimum weight greater than or equal to d.
Proof. Since m_i(x) divides g(x) for b ≤ i ≤ b + d − 2, and m_i(α^i) = 0, we have
g(α^b) = g(α^{b+1}) = ⋯ = g(α^{b+d−2}) = 0. The BCH bound (with this b and d) implies that the code has minimum weight at least d.
Let F = Z2, and let n = 7. Then
x^7 − 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1). Let α be a root of x^3 + x + 1. Then α is a primitive 7th root of unity, as in the previous example. Moreover, in that example, we showed that α^2 is also a root of x^3 + x + 1. In fact, we actually showed that the square of a root of x^3 + x + 1 is also a root, so α^4 is also a root of x^3 + x + 1. (We could square this again, but α^8 = α, so we are back to where we started.) Therefore, α, α^2, α^4 are the roots of x^3 + x + 1, so
m_1(x) = m_2(x) = m_4(x) = x^3 + x + 1. The remaining powers α^3, α^5, α^6 of α must be roots of x^3 + x^2 + 1, so
m_3(x) = m_5(x) = m_6(x) = x^3 + x^2 + 1. Also, m_0(x) = x + 1.
If we take b = 0 and d = 3, then
g(x) = lcm(m_0(x), m_1(x)) = (x + 1)(x^3 + x + 1) = 1 + x^2 + x^3 + x^4.
We obtain the cyclic code discussed in Section 24.7. The theorem says that the minimum weight is at least 3. In this case, we can do a little better. If we take b = 0 and d = 4, then we have a generating polynomial with
lcm(m_0(x), m_1(x), m_2(x)) = lcm(m_0(x), m_1(x)).
This is because m_2(x) = m_1(x), so the least common multiple doesn't change when m_2(x) is included. The theorem now tells us that the minimum weight of the code is at least 4. As we have seen before, the minimum weight is exactly 4.
Let’s continue with the previous example, but take and . Then
We obtain the repetition code with only two codewords:
The theorem says that the minimum distance is at least 7. In fact it is exactly 7.
Let F = Z5 = the integers mod 5. Let n = 4. Then
x^4 − 1 = (x − 1)(x − 2)(x − 3)(x − 4) (this is an equality, or congruence if you prefer, in Z5[x]). Let α = 2. We have 2^4 = 16 ≡ 1 (mod 5), but 2^k ≢ 1 (mod 5) for 1 ≤ k ≤ 3. Therefore, 2 is a primitive 4th root of unity in Z5. We have α = 2, α^2 = 4, α^3 = 3 (these are just congruences mod 5). Therefore,
m_1(x) = x − 2, m_2(x) = x − 4, m_3(x) = x − 3. In the theorem, let b = 1 and d = 3. Then
g(x) = lcm(m_1(x), m_2(x)) = (x − 2)(x − 4) = x^2 + 4x + 3. We obtain a cyclic code over Z5 with generating matrix
The theorem says that the minimum weight is at least 3. Since the first row of the matrix is a codeword of weight 3, the minimum weight is exactly 3. This code is an example of a Reed-Solomon code, which will be discussed in the next section.
One of the reasons BCH codes are useful is that there are good decoding algorithms. One of the best known is due to Berlekamp and Massey (see [Gallager] or [Wicker]). In the following, we won't give the algorithm, but, in order to give the spirit of some of the ideas that are involved, we show a way to correct one error in a BCH code with designed distance at least 3.
Let be a BCH code of designed distance . Then is a cyclic code, say of length , with generating polynomial . There is a primitive th root of unity such that
for some integer .
Let
If is a codeword, then the polynomial is a multiple of , so
This may be rewritten in terms of :
is not necessarily a parity check matrix for , since there might be noncodewords that are also in the null space of . However, as we shall see, can correct an error.
Suppose the vector is received, where is a codeword and is an error vector. We assume that at most one entry of is nonzero.
Here is the algorithm for correcting one error.
Write .
If , there is no error (or there is more than one error), so we’re done.
If , compute . This will be a power of . The error is in the th position. If we are working over the finite field , we are done, since then . But for other finite fields, there are several choices for the value of .
Compute . This is the th entry of the error vector . The other entries of are 0.
Subtract the error vector from the received vector to obtain the correct codeword .
Let’s look at the BCH code over of length 7 and designed distance 7 considered previously. It is the binary repetition code of length 7 and has two codewords: . The algorithm corrects one error. Suppose the received vector is . As before, let be a root of . Then is a primitive 7th root of unity.
Before proceeding, we need to deduce a few facts about computing with powers of α. We have α^3 = α + 1. Multiplying this relation by powers of α yields
α^4 = α^2 + α, α^5 = α^2 + α + 1, α^6 = α^2 + 1. Also, the fact that α^7 = 1 is useful.
We now can compute
The sum in the first entry, for example, can be evaluated as follows:
Therefore, and . We need to calculate . Since , we have
Therefore, , so the error is in position . The fifth entry of the error vector is , so the error vector is . The corrected message is
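The single-error correction just carried out can be sketched in code. The received vector below is an assumption consistent with the computations in the text (the all-zero codeword with an error in position 5):

```python
def gf8_mul(a, b):
    # GF(8) = Z2[x]/(x^3 + x + 1); elements are 3-bit integers
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (5, 4, 3):          # reduce using x^3 = x + 1
        if (r >> i) & 1:
            r ^= (1 << i) | (1 << (i - 2)) | (1 << (i - 3))
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

alpha = 0b010                    # primitive 7th root of unity

def correct_one_error(r):
    # Syndromes s1 = r(alpha), s3 = r(alpha^3); position i+1 contributes alpha^i
    s1 = s3 = 0
    for i, ri in enumerate(r):
        if ri:
            s1 ^= gf8_pow(alpha, i)
            s3 ^= gf8_pow(alpha, 3 * i)
    if s1 == 0:
        return list(r)                        # no single error detected
    assert gf8_pow(s1, 3) == s3               # single-error condition s1^3 = s3
    j = next(i for i in range(7) if gf8_pow(alpha, i) == s1)
    out = list(r)
    out[j] ^= 1                               # flip the offending bit
    return out

received = [0, 0, 0, 0, 1, 0, 0]              # assumed: error in position 5
assert correct_one_error(received) == [0, 0, 0, 0, 0, 0, 0]
```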
Here is why the algorithm works. Since , we have
If with , then the definition of gives
Therefore, . Also, , as claimed.
The Reed-Solomon codes, constructed in 1960, are an example of BCH codes. Because they work well for certain types of errors, they have been used in spacecraft communications and in compact discs.
Let F be a finite field with q elements and let n = q − 1. A basic fact from the theory of finite fields is that F contains a primitive (q − 1)st root of unity α. Choose d with 1 ≤ d ≤ q − 1 and let
g(x) = (x − α)(x − α^2)⋯(x − α^{d−1}). This is a polynomial with coefficients in F. It generates a BCH code C over F of length q − 1, called a Reed-Solomon code.
Since g(α) = g(α^2) = ⋯ = g(α^{d−1}) = 0, the BCH bound implies that the minimum distance for C is at least d. Since g(x) is a polynomial of degree d − 1, it has at most d nonzero coefficients. Therefore, the codeword corresponding to the coefficients of g(x) is a codeword of weight at most d. It follows that the minimum weight for C is exactly d. The dimension of C is n − d + 1. Therefore, a Reed-Solomon code is a cyclic [q − 1, q − d, d] code.
The codewords in C correspond to the polynomials
g(x)f(x), with deg f ≤ n − d. There are q^{n−d+1} such polynomials since there are q choices for each of the n − d + 1 coefficients of f(x), and thus there are q^{n−d+1} codewords in C. Therefore, a Reed-Solomon code is an MDS (maximum distance separable) code, namely, one that makes the Singleton bound (Section 24.3) an equality.
Let F = Z7, the integers mod 7. Then q = 7 and n = 6. A primitive sixth root of unity in Z7 is the same as a primitive root mod 7 (see Section 3.7). We may take α = 3. Choose d = 4. Then
g(x) = (x − 3)(x − 3^2)(x − 3^3) = (x − 3)(x − 2)(x − 6) = x^3 + 3x^2 + x + 6.
The code has generating matrix
There are 7^3 = 343 codewords in the code, obtained by taking all linear combinations mod 7 of the three rows of G. The minimum weight of the code is 4.
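The product defining g(x) in this example is easy to recompute:

```python
def poly_mul(a, b, q):
    # Multiply polynomials (coefficient lists, lowest degree first) mod q
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

q, alpha = 7, 3
g = [1]
for i in (1, 2, 3):                           # roots alpha, alpha^2, alpha^3
    root = pow(alpha, i, q)                   # 3, 2, 6 mod 7
    g = poly_mul(g, [(-root) % q, 1], q)

assert g == [6, 1, 3, 1]                      # g(x) = 6 + x + 3x^2 + x^3
assert all(sum(c * pow(r, k, q) for k, c in enumerate(g)) % q == 0
           for r in (3, 2, 6))                # each root really is a root
```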
Let F = GF(4), which was introduced in Section 3.11. Then F has 4 elements, q = 4, and n = 3. Choose d = 2, so
g(x) = x − ω, where ω is a primitive cube root of unity in GF(4).
The matrix
is a generating matrix for the code. The code contains all 16 linear combinations of the two rows of , for example,
The minimum weight of the code is 2.
In many applications, errors are not randomly distributed. Instead, they occur in bursts. For example, in a CD, a scratch introduces errors in many adjacent bits. A burst of solar energy could have a similar effect on communications from a spacecraft. Reed-Solomon codes are useful in such situations.
For example, suppose we take F = GF(2^8). The elements of F are represented as bytes of eight bits each, as in Section 3.11. We have n = 255. Let d = 34. The codewords are then vectors consisting of 255 bytes. There are 222 information bytes and 33 check bytes. These codewords are sent as strings of binary bits. Disturbances in the transmission will corrupt some of these bits. However, in the case of bursts, these bits will often be in a small region of the transmitted string. If, for example, the corrupted bits all lie within a string of 121 (= 15 · 8 + 1) consecutive bits, there can be errors in at most 16 bytes. Therefore, these errors can be corrected (because the code corrects up to ⌊33/2⌋ = 16 byte errors). On the other hand, if there were 121 bit errors randomly distributed through the string of 2040 bits, numerous bytes would be corrupted, and correct decoding would not be possible. Therefore, the choice of code depends on the type of errors that are expected.
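The byte-counting argument can be checked directly:

```python
def bytes_touched(start_bit, length):
    # Number of 8-bit bytes that a run of `length` consecutive bits overlaps
    first = start_bit // 8
    last = (start_bit + length - 1) // 8
    return last - first + 1

# A burst of 121 consecutive bits never touches more than 16 bytes...
assert max(bytes_touched(s, 121) for s in range(8)) == 16
# ...and a [255, 222] Reed-Solomon code (33 check bytes) corrects 16 byte errors
assert (255 - 222) // 2 == 16
```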
In this book, we have mostly described cryptographic systems that are based on number theoretic principles. There are many other cryptosystems that are based on other complex problems. Here we present one based on the difficulty of finding the nearest codeword for a linear binary code.
The idea is simple. Suppose you have a binary string of length 1024 that has 50 errors. There are C(1024, 50), which is more than 10^85, possible locations for these errors, so an exhaustive search that tries all possibilities is infeasible. Suppose, however, that you have an efficient decoding algorithm that is unknown to anyone else. Then only you can correct these errors and find the corrected string. McEliece showed how to use this to obtain a public key cryptosystem.
Bob chooses G to be the generating matrix for an [n, k, d] linear error correcting code with d ≥ 2t + 1 for some t. He chooses S to be a k × k matrix that is invertible mod 2 and lets P be an n × n permutation matrix, which means that P has exactly one 1 in every row and in every column, with all the other entries being 0. Define
G1 = SGP. The k × n matrix G1 is the public key for the cryptosystem. Bob keeps G, S, P secret.
In order for Alice to send Bob a binary message m of length k, she generates a random binary string e of length n that has weight t. She forms the ciphertext by computing y = mG1 + e.
Bob decrypts as follows:
Calculate y1 = yP^{−1}. (Since P is a permutation matrix, e1 = eP^{−1} is still a binary string of weight t. We have y1 = mSG + e1.)
Apply the error decoder for the code generated by G to y1 to correct the "error" e1 and obtain the codeword c1 closest to y1.
Compute m1 such that m1G = c1 (in the examples we have considered, m1 is simply the first k bits of c1).
Compute m = m1S^{−1}.
The security of the system lies in the difficulty of decoding to obtain . There is a little security built into the system by ; however, once a decoding algorithm is known for the code generated by , a chosen plaintext attack allows one to solve for the matrix (as in the Hill cipher).
To make decoding difficult, n should be chosen to be large. McEliece suggested using a Goppa code. The Goppa codes have parameters of the form [2^m, 2^m − mt, 2t + 1]. For example, taking m = 10 and t = 50 yields the [1024, 524, 101] code just mentioned. It can correct up to 50 errors. For given values of m and t, there are in fact many inequivalent Goppa codes with these parameters. We will not discuss these codes here except to mention that they have an efficient decoding algorithm and therefore can be used to correct errors quickly.
Consider the matrix
which is the generator matrix for the Hamming code. Suppose Alice wishes to send a message
to Bob. In order to do so, Bob must create an invertible matrix S and a random permutation matrix P that he will keep secret. Suppose Bob chooses
and
Using these, Bob generates the public encryption matrix
In order to encrypt, Alice generates her own random error vector e and calculates the ciphertext. In the case of a Hamming code the error vector has weight 1. Suppose Alice chooses
Then
Bob decrypts by first calculating
Calculating the syndrome of by applying the parity check matrix and changing the corresponding bit yields
Bob next forms a vector such that , which can be done by extracting the first four components of , that is,
Bob decrypts by calculating
which is the original plaintext message.
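The whole toy system fits in a short sketch. The matrices S and P, the message, and the error below are hypothetical choices (the book's specific values are not reproduced here); the [7,4] Hamming code and the decryption steps follow the algorithm above:

```python
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def vec_mat(v, A):
    # row vector times matrix, mod 2
    return [sum(vi * A[i][j] for i, vi in enumerate(v)) % 2
            for j in range(len(A[0]))]

def gf2_inverse(M):
    # Gauss-Jordan elimination over GF(2)
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [(x + y) % 2 for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Systematic [7,4] Hamming code: G = [I | A], H = [A^T | I]
A = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + A[i] for i in range(4)]
H = [[A[i][k] for i in range(4)] + [int(k == j) for j in range(3)]
     for k in range(3)]

# Hypothetical secret key
S = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 0]]
perm = [2, 0, 6, 1, 4, 3, 5]
P = [[int(j == perm[i]) for j in range(7)] for i in range(7)]
Pt = [list(col) for col in zip(*P)]           # inverse of a permutation matrix

G1 = mat_mul(mat_mul(S, G), P)                # public key

# --- encryption (hypothetical message and weight-1 error) ---
m = [1, 0, 1, 1]
e = [0, 0, 0, 0, 1, 0, 0]
y = [(a + b) % 2 for a, b in zip(vec_mat(m, G1), e)]

# --- decryption ---
y1 = vec_mat(y, Pt)                           # undo the permutation
syn = [sum(H[k][j] * y1[j] for j in range(7)) % 2 for k in range(3)]
if any(syn):                                  # syndrome equals a column of H
    pos = next(j for j in range(7) if [H[k][j] for k in range(3)] == syn)
    y1[pos] ^= 1                              # correct the single error
m1 = y1[:4]                                   # systematic code: message bits first
recovered = vec_mat(m1, gf2_inverse(S))
assert recovered == m
```

Note that the decryption never needs G1 itself: undoing P reduces the problem to ordinary Hamming decoding, after which S is stripped off by its inverse.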
The McEliece system seems to be reasonably secure. For a discussion of its security, see [Chabaud]. A disadvantage of the system compared to RSA, for example, is that the size of the public key is rather large.
The field of error correcting codes is a vast subject that is explored by both the mathematical community and the engineering community. In this chapter we have only touched upon a select handful of the concepts of this field. There are many other areas of error correcting codes that we have not discussed.
Perhaps most notable of these is the study of convolutional codes. In this chapter we have entirely focused on block codes, where typically the data are segmented into blocks of a fixed length and mapped into codewords of a fixed length . However, in many applications, the data are produced in a continuous fashion, and it is better to map the stream of data into a stream of coded symbols. For example, such codes have the advantage of not requiring the delay needed to observe an entire block of symbols before encoding or decoding. A good analogy is that block codes are the coding theory analogue of block ciphers, while convolutional codes are the analogue of stream ciphers.
Another topic that is very important in the study of error correcting codes is that of efficient decoding. In the case of linear codes, we presented syndrome decoding, which is more efficient than performing a search for the nearest codeword. However, for large linear codes, syndrome decoding is still too inefficient to be practical. When BCH and Reed-Solomon codes were introduced, the decoding schemes that were originally presented were impractical for decoding more than a few errors. Later, Berlekamp and Massey provided an efficient approach to decoding BCH and Reed-Solomon codes. There is still a lot of research being done on this topic. We direct the reader to the books [Lin-Costello], [Wicker], [Gallager], and [Berlekamp] for further discussion on the subject of decoding.
We have also focused entirely on bit or symbol errors. However, in modern computer networks, the types of errors that occur are not simply bit or symbol errors but also the complete loss of segments of data. For example, on the Internet, data are transferred over the network in chunks called packets. Due to congestion at various locations on the network, such as routers and switches, packets might be dropped and never reach their intended recipient. In this case, the recipient might notify the sender, requesting a packet to be resent. Protocols such as the Transmission Control Protocol (TCP) provide mechanisms for retransmitting lost packets.
When performing cryptography, it is critical to use a combination of many different types of error control techniques to assure reliable delivery of encrypted messages; otherwise, the receiver might not be able to decrypt the messages that were sent.
Finally, we mention that coding theory has strong connections with various problems in mathematics such as finding dense packings of high-dimensional spheres. For more on this, see [Thompson].
Two codewords were sent using the Hamming [7, 4] code and were received as 0100111 and 0101010. Each one contains at most one error. Correct the errors. Also, determine the 4-bit messages that were multiplied by the generator matrix to obtain the codewords.
An ISBN number is incorrectly written as 0-13-116093-8. Show that this is not a correct ISBN number. Find two different valid ISBN numbers such that an error in one digit would give this number. This shows that ISBN cannot correct errors.
The following is a parity check matrix for a binary code :
Find and .
Find the generator matrix for .
List the codewords in .
What is the code rate for ?
Let be a binary repetition code.
Find a parity check matrix for .
List the cosets and coset leaders for .
Find the syndrome for each coset.
Suppose you receive the message . Use the syndrome decoding method to decode it.
Let be the binary code .
Show that is not linear.
What is ? (Since is not linear, this cannot be found by calculating the minimum weight.)
Show that satisfies the Singleton bound with equality.
Show that the weight function wt (on the space of binary vectors) satisfies the triangle inequality: wt(u + v) ≤ wt(u) + wt(v).
Show that d(u, v) = wt(u − v), where d is the function defined in Section 24.3.
Let be the repetition code of length . Show that is the parity check code of length . (This is true for arbitrary .)
Let be a linear code and let and be cosets of . Show that if and only if . (Hint: To show , it suffices to show that for every , and that for every . To show the opposite implication, use the fact that .)
Show that if is a self-dual code, then must be even.
Show that is the generating polynomial for the repetition code. (This is true for arbitrary .)
Let be a polynomial with coefficients in .
Show that is a factor of in .
The polynomial is the generating polynomial for a cyclic code C. Find the generating matrix for C.
Find a parity check matrix for .
Show that , where
Show that the rows of generate .
Show that a permutation of the columns of gives the generating matrix for the Hamming code, and therefore these two codes are equivalent.
Let be the cyclic binary code of length with generating polynomial . Which of the following polynomials correspond to elements of ?
Let be the generating polynomial for a cyclic code of length , and let . Write . Show that the dual code is cyclic with generating polynomial . (The factor is included to make the highest nonzero coefficient be 1.)
Let C be a binary repetition code of odd length n (that is, C contains two vectors, one with all 0s and one with all 1s). Show that C is perfect. (Hint: Show that every vector lies in exactly one of the two spheres of radius (n − 1)/2.)
Use (a) to show that if n is odd then C(n, 0) + C(n, 1) + ⋯ + C(n, (n − 1)/2) = 2^{n−1}. (This can also be proved by applying the binomial theorem to (1 + 1)^n, and then observing that we’re using half of the terms.)
Let and let denote the number of points in a Hamming sphere of radius . The proof of the Gilbert-Varshamov bound constructs an code with . However, this code is probably not linear. This exercise will construct a linear code, where is the smallest integer satisfying .
Show that there exists an code .
Suppose and that we have constructed an code in (where is the finite field with elements). Show that there is a vector with for all .
Let be the subspace spanned by and . Show that has dimension and that every element of can be written in the form with and .
Let , with , be an element of , as in (c). Show that .
Show that is an code. Continuing by induction, we obtain the desired code .
Here is a technical point. We have actually constructed an code with . Show that by possibly modifying in step (b), we may arrange that for some , so we obtain an code.
Show that the Golay code is perfect.
Let α be a root of the polynomial 1 + x + x³ (with coefficients mod 2).
Using the fact that 1 + x + x³ divides 1 + x⁷, show that α⁷ = 1.
Show that α ≠ 1.
Suppose that α^j = 1 with 0 < j < 7. Then gcd(j, 7) = 1, so there exist integers a, b with aj + 7b = 1. Use this to show that α = 1, which is a contradiction. This shows that α is a primitive seventh root of unity.
Let be the binary code of length 7 generated by the polynomial . As in Section 24.8, , where is a root of . Suppose the message is received. It has one error. Use the procedure from Section 24.8 to correct the error.
Let be a cyclic code of length with generating polynomial . Assume and (as in the theorem on p. 472).
Show that .
Write . Let be a primitive th root of unity. Show that at least one of is a root of . (You may use the fact that cannot have more than roots.)
Show that .
Three codewords from the Golay code are sent and you receive the vectors
Correct the errors. (The Golay matrix is stored as golay and the matrix is stored in the downloadable computer files (bit.ly/2JbcS6p) as golaybt.)
An 11-bit message is multiplied by the generating matrix for the Hamming [15, 11] code and the resulting codeword is sent. The vector
is received. Assuming there is at most one error, correct it and determine the original 11-bit message. (The parity check matrix for the Hamming [15, 11] code is stored in the downloadable computer files (bit.ly/2JbcS6p) as hammingpc.)
Quantum computing is a new area of research that has only recently started to blossom. Quantum computing and quantum cryptography were born out of the study of how quantum mechanical principles might be used in performing computations. The Nobel Laureate Richard Feynman observed in 1982 that certain quantum mechanical phenomena could not be simulated efficiently on a classical computer. He suggested that the situation could perhaps be reversed by using quantum mechanics to do computations that are impossible on classical computers. Feynman didn’t present any examples of such devices, and only recently has there been progress in constructing even small versions.
In 1994 the field of quantum computing had a significant breakthrough when Peter Shor of AT&T Research Labs introduced a quantum algorithm that can factor integers in (probabilistic) polynomial time (if a suitable quantum computer is ever built). This was a dramatic breakthrough as it presented one of the first examples of a scenario in which quantum techniques might significantly outperform classical computing techniques.
In this chapter we introduce a couple of examples from the area of quantum computing and quantum cryptography. By no means is this chapter a thorough treatment of this young field, for even as we write this chapter significant breakthroughs are being made at NIST and other places, and the field likely will continue to advance rapidly.
There are many books and expository articles being written on quantum computing. One readable account is [Rieffel-Polak].
Quantum mechanics is a difficult subject to explain to nonphysicists since it deals with concepts where our everyday experiences aren’t applicable. In particular, the scale at which quantum mechanical phenomena take place is on the atomic level, which is something that can’t be observed without special equipment. There are a few examples, however, that are accessible to us, and we now present one such example and use it to develop the mathematical formulation needed to describe some quantum computing protocols.
Since quantum mechanics describes physics at the level of individual particles, we need particles that we are able to observe. Photons are the particles that make up light and are therefore observable (similar demonstrations using other particles, such as electrons, can be performed but require more sophisticated equipment).
In order to understand this experiment better, we recommend that you try it at home. Start with a light source and three Polaroid® filters from a camera supply store or three lenses from Polaroid sunglasses.
Label the three filters A, B, and C. Rotate them so that they have the following polarizations: horizontal, 45°, and vertical, respectively (we will explain polarization in more detail after the experiment). Shine the light at the wall and insert filter A between the light source and the wall as in Figure 25.1. The photons coming out of filter A will have horizontal polarization. Now insert filter C as in Figure 25.2. Since filter C has vertical polarization, it filters out all of the horizontally polarized photons from filter A. Notice that no light arrives at the wall after this step: the two filters have removed all of the light. Now for the final (and most bizarre) step, insert filter B in between filters A and C. You should observe that there is now light arriving at the wall, as depicted in Figure 25.3. This is puzzling, since filters A and C were enough to remove all of the light, yet the addition of a third filter allows light to reach the wall.
In order to explain this demonstration, we need to discuss the concept of polarization of light.
Light is an example of an electromagnetic wave, meaning that it consists of an electric field that travels orthogonally to a corresponding magnetic field. In order to visualize this, consider the light traveling along the z-axis. Now imagine, for example, that the electric field is a wavelike function that lies in the yz-plane. Then the corresponding magnetic field would be a wavelike function in the xz-plane. For such a scenario, the light is referred to as vertically polarized. In general, polarization refers to the direction in which the electric field lies. There is no constraint on this direction.
We will represent a photon’s polarization by a unit vector in the two-dimensional complex vector space (however, for our present purposes, real numbers suffice). This vector space has a dot product given by (a₁, b₁) · (a₂, b₂) = a₁ā₂ + b₁b̄₂, where ā₂ and b̄₂ denote the complex conjugates of a₂ and b₂. The square of the length of a vector v is then |v|² = v · v. Choose a basis, which we shall denote |↑⟩ and |→⟩, for this vector space. We are choosing to use the ket (the second half of “bracket”) notation from physics to represent vectors. We can think of |↑⟩ as being the vertical direction and |→⟩ as being horizontal. Therefore, an arbitrary polarization may be represented as a|↑⟩ + b|→⟩, where a and b are complex numbers. Since we are working with unit vectors, the following property holds: |a|² + |b|² = 1. We could just as well have chosen a different orthogonal basis, for example, one corresponding to a 45° rotation: |↗⟩ = (1/√2)(|↑⟩ + |→⟩) and |↘⟩ = (1/√2)(|↑⟩ − |→⟩).
The Polaroid filters perform a measurement of the polarity of the photon. There are two possible outcomes: Either the photon is aligned with the filter, or it is perpendicular to the direction of the filter. If the vector a|↑⟩ + b|→⟩ is measured by a vertical filter, then the probability that the photon has vertical polarity after passing through the filter is |a|². The probability that it is measured as having horizontal polarity, and therefore does not pass through, is |b|².
Similarly, suppose we measure a vertically aligned photon with respect to a 45° filter. Since
|↑⟩ = (1/√2)(|↗⟩ + |↘⟩),
the probability that the photon passes through the filter (which means that it is measured as being aligned at 45°) is 1/2. Similarly, the probability that it doesn’t pass through the filter (which means that it is measured at −45°) is also 1/2.
One of the basic principles of quantum mechanics is that such a measurement forces the photon into a definite state. After being measured, the state of the photon will be changed to the result of the measurement. Therefore, if we measured the state of a|↑⟩ + b|→⟩ as |↑⟩, then, from that moment on, the photon will have the state |↑⟩. If we then measure this photon with a vertical filter, we will always observe that the photon is in the state |↑⟩; however, if we measure with a horizontal filter, we will never observe that the photon is in the state |→⟩.
Let’s now explain the interpretation of the experiment. The original light was emitted with random polarization, so only half of the photons being emitted will pass through the horizontal filter A, and these photons will have their state changed to |→⟩. The remaining half will be absorbed or reflected and will be changed to |↑⟩. When we place the vertical filter after the horizontal filter, the photons that hit it, which are in state |→⟩, will be stopped.
When we insert filter B in the middle, it corresponds to measuring with respect to |↗⟩, and hence those photons that had polarity |→⟩ will come out having polarity |↗⟩ with probability 1/2. Therefore, there has been a reduction by half in the number of photons passing through up to filter C. Now the photons pass through the vertical filter with probability 1/2 also, and so the total intensity of light arriving at the wall is 1/8th the original intensity.
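The intensity bookkeeping of the experiment can be checked numerically. This is a minimal sketch, not part of the text: the function name `transmitted_fraction` is our own, and it assumes ideal filters, a first filter that passes half of randomly polarized light, and the cos² probability rule for each successive pair of filter angles.

```python
import math

# Fraction of randomly polarized light surviving a chain of ideal polarizing
# filters: the first filter passes half; each later filter passes a photon
# with probability cos^2 of the angle between successive filter orientations.
def transmitted_fraction(angles_deg):
    frac = 0.5
    for a, b in zip(angles_deg, angles_deg[1:]):
        frac *= math.cos(math.radians(b - a)) ** 2
    return frac

print(round(transmitted_fraction([0, 90]), 6))      # filters A then C: 0.0
print(round(transmitted_fraction([0, 45, 90]), 6))  # filters A, B, C: 0.125
```

Inserting the 45° filter raises the transmitted fraction from 0 to 1/8, exactly the puzzle described above.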
Now that we have set up some of the ideas behind quantum mechanics, we can use them to describe a technique for distributing bits through a quantum channel. These bits can be used to establish a key that can be used for communicating across a classical channel, or any other shared secret.
We begin by describing a quantum bit. Start with a two-dimensional complex vector space. Choose a pair of orthogonal vectors of length 1; call them |0⟩ and |1⟩. For example, these two vectors could be either of the two pairs of orthogonal vectors used in the previous section. A quantum bit, also known as a qubit, is a unit vector in this vector space. For the purposes of the present discussion, we can think of a qubit as a polarized photon. We have chosen |0⟩ and |1⟩ as notation to conveniently represent the 0 and 1 bits, respectively. The other qubits are linear combinations of these two bits.
Since a qubit is a unit vector, it can be represented as a|0⟩ + b|1⟩, where a and b are complex numbers such that |a|² + |b|² = 1. Just as in the case for photons from the preceding section, we can measure this qubit with respect to the basis {|0⟩, |1⟩}. The probability that we observe it in the state |0⟩ is |a|².
Let us now examine how Alice and Bob can communicate with each other in order to establish a message. They will need two things: a quantum channel and a classical channel. A quantum channel is one through which they can exchange polarized photons that are isolated from interactions with the environment (that is, the environment doesn’t alter the photons). The classical channel will be used to send ordinary messages to each other. We assume that the evil observer Eve can observe what is being sent on the classical channel and that she can observe and resend photons on the quantum channel.
Alice starts the establishment of a message by sending a sequence of bits to Bob. They are encoded using a randomly chosen basis for each bit as follows. There are two bases: B₁ = {|↑⟩, |→⟩} and B₂ = {|↗⟩, |↘⟩}. If Alice chooses B₁, then she encodes 0 as |↑⟩ and 1 as |→⟩, while if she chooses B₂ then she encodes 0 and 1 using the two elements of B₂.
Each time Alice sends a photon, Bob randomly chooses to measure with respect to either basis B₁ or B₂. Therefore, for each photon, he obtains an element of that choice of basis as the result of his measurement. Bob records the measurements he has made and keeps them secret. He then tells Alice the basis with which he measured each photon. Alice responds to Bob by telling him which bases were the correct bases for the polarity of the photons that she sent. They keep the bits that used the same bases and discard the other bits. Since two bases were used, Alice and Bob will agree on roughly half of the bits that Alice sent. They can then use these bits as the key for a conventional cryptographic system.
Suppose Alice wants to send the bits . She randomly chooses the bases . Therefore, she sends the qubits (photons)
to Bob. He chooses the bases . He measures the qubits that Alice sent and also tells Alice which bases he used. Alice tells him that the second, fourth, fifth, seventh, and eighth match her choices. These yielded measurements
for Bob, and they correspond to the bits . Therefore, both Alice and Bob have the same string . They use as a key for future communication (for example, if they obtained a longer string, they could use the first 128 characters for an AES key).
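The basis-comparison ("sifting") step can be simulated classically. This sketch is our own illustration, not the text's example: bases and bits are coin flips, and a mismatched basis is modeled as a fair-coin measurement, as the probabilities in the previous section dictate.

```python
import random

# A toy BB84 run with no eavesdropper. Bob's measurement is faithful when the
# bases match and a fair coin otherwise; mismatched positions are discarded.
def bb84(n_bits, rng):
    alice_bits  = [rng.randrange(2) for _ in range(n_bits)]
    alice_bases = [rng.randrange(2) for _ in range(n_bits)]
    bob_bases   = [rng.randrange(2) for _ in range(n_bits)]
    bob_bits = [a if ab == bb else rng.randrange(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Publicly compare bases; keep only the positions where they agree.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

rng = random.Random(1)
ka, kb = bb84(1000, rng)
print(len(ka), ka == kb)   # roughly 500 shared bits; the two keys agree
```

As the text says, about half of the transmitted bits survive the sifting, and (absent eavesdropping) Alice's and Bob's sifted strings are identical.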
The security behind quantum key distribution is based upon the laws of quantum mechanics and the fundamental principle that following a measurement of a particle, that particle’s state will be altered. Since an eavesdropper Eve must perform measurements in order to observe the photon transmissions between Alice and Bob, Eve will introduce errors in the data that Alice and Bob agreed upon.
Let’s see how this happens. Suppose Eve measures the states of the photons transmitted by Alice and allows these measured photons to proceed on to Bob. Since these photons were measured by Eve, they will have the state that Eve observed. Eve will use the wrong basis half of the time when performing the measurement. When Bob performs his measurements, even if he uses the correct basis there will be a 25% chance that he will have measured the wrong value.
Let’s examine this last statement in more detail. Suppose that Alice sends a photon corresponding to |↑⟩ and that Bob uses the same basis as Alice. If Eve uses B₁, then the photon is passed through correctly and then Bob measures the photon correctly. However, if Eve used B₂, then she will measure |↗⟩ and |↘⟩ equally likely. The photons that pass on to Bob will have one of these orientations, and he will therefore half the time measure them correctly as |↑⟩ and half the time incorrectly. Combining the two possible choices of basis that Eve has causes Bob to have a 25% chance of measuring the incorrect value.
Thus, any eavesdropping introduces a higher error rate in the communication between Alice and Bob. If Alice and Bob test their data for discrepancies over the conventional channel (for example, they could send parity bits), they will detect any eavesdropping.
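The 25% figure can be confirmed by simulating an intercept-resend attack. This is a minimal sketch under our own modeling assumptions (the name `sifted_error_rate` is ours): each mismatched-basis measurement is a fair coin, and we count errors only on the sifted positions where Alice's and Bob's bases agree.

```python
import random

# Intercept-resend attack: Eve measures each photon in a random basis and
# resends what she saw. On sifted positions (Alice's and Bob's bases agree),
# Eve's tampering makes Bob's bit wrong about a quarter of the time.
def sifted_error_rate(n, rng):
    errors = agreed = 0
    for _ in range(n):
        bit, a_basis = rng.randrange(2), rng.randrange(2)
        e_basis, b_basis = rng.randrange(2), rng.randrange(2)
        e_bit = bit if e_basis == a_basis else rng.randrange(2)   # Eve's result
        b_bit = e_bit if b_basis == e_basis else rng.randrange(2) # Bob's result
        if a_basis == b_basis:
            agreed += 1
            errors += (b_bit != bit)
    return errors / agreed

print(sifted_error_rate(100_000, random.Random(7)))   # close to 0.25
```

By comparing a sample of their sifted bits over the classical channel, Alice and Bob would see this elevated error rate and detect Eve.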
Actual implementations of this technique have been used to establish keys over distances of more than 100 km using conventional fiber optic cables.
Quantum computers are not yet a reality. The current versions can only handle a few qubits. But, if the great technical problems can be overcome and large quantum computers are built, the effect on cryptography will be enormous. In this section we give a brief glimpse at how a quantum computer could factor large integers, using an algorithm developed by Peter Shor. We avoid discussing quantum mechanics and ask the reader to believe that a quantum computer should be able to do all the operations we describe, and do them quickly. For more details, see, for example, [Ekert-Josza] or [Rieffel-Polak].
What is a quantum computer and what does it do? First, let’s look at what a classical computer does. It takes a binary input, for example, 100010, and gives a binary output, perhaps 0101. If it has several inputs, it has to work on them individually. A quantum computer takes as input a certain number of qubits and outputs some qubits. The main difference is that the input and output qubits can be linear combinations of certain basic states. The quantum computer operates on all basic states in this linear combination simultaneously. In effect, a quantum computer is a massively parallel machine.
For example, think of the basic state |100⟩ as representing three particles, the first in orientation 1 and the last two in orientation 0 (with respect to some basis that will implicitly be fixed throughout the discussion). The quantum computer can take |100⟩ and produce some output. However, it can also take as input a normalized (that is, of length 1) linear combination of basic quantum states such as
and produce an output just as quickly as it did when working with a basic state. After all, the computer could not know whether a quantum state is one of the basic states, or a linear combination of them, without making a measurement. But such a measurement would alter the input. It is this ability to work with a linear combination of states simultaneously that makes a quantum computer potentially very powerful.
Suppose we have a function f(x) that can be evaluated for an input x by a classical computer. The classical computer asks for an input and produces an output. A quantum computer, on the other hand, can accept as input a sum
(1/c) Σₓ |x⟩
(c is a normalization factor) of all possible input states and produce the output
(1/c) Σₓ |x, f(x)⟩,
where |x, f(x)⟩ is a longer sequence of qubits, representing both x and the value of f(x). (Technical point: It might be notationally better to input (1/c) Σₓ |x, 0⟩ in order to have some particles to change to f(x). For simplicity, we will not do this.) So we can obtain a list of all the values of f(x). This looks great, but there is a problem. If you make a measurement, you force the quantum state into the result of the measurement. You get |x, f(x)⟩ for some randomly chosen x, and the other states in the output are destroyed. So, if you are going to look at the list of values of f(x), you’d better do it carefully, since you get only one chance. In particular, you probably want to apply some transformation to the output in order to put it into a more desirable form. The skill in programming a quantum computer is in designing the computation so that the outputs you want to examine appear with much higher probability than the others. This is what is done in Shor’s factorization algorithm.
We want to factor n. The strategy is as follows. Recall that if we can find (nontrivial) a and r with a^r ≡ 1 (mod n), then we have a good chance of factoring n (see the factorization method in Subsection 9.4.1). Choose a random a and consider the sequence 1, a, a², a³, … (mod n). If gcd(a, n) = 1, then this sequence will repeat every r terms since a^r ≡ 1 (mod n). If we can measure the period of this sequence (or a multiple of the period), we will have an r such that a^r ≡ 1 (mod n). We therefore want to design our quantum computer so that when we make a measurement on the output, we’ll have a high chance of obtaining the period.
We need a technique for finding the period of a periodic sequence. Classically, Fourier transforms can be used for this purpose, and they can be used in the present situation, too. Suppose we have a sequence
f(0), f(1), f(2), …, f(N − 1)
of length N = 2^m, for some integer m. Define the Fourier transform to be
f̂(y) = (1/√N) Σ_{x=0}^{N−1} f(x) ω^{xy},  0 ≤ y < N,
where ω = e^{2πi/N}.
For example, consider the sequence
1, 0, 0, 0, 1, 0, 0, 0
of length 8 and period 4. The length divided by the period is the frequency, namely 2, which is how many times the sequence repeats. The Fourier transform takes the values
f̂(y) = (1/√8)(1 + ω^{4y}),  ω = e^{2πi/8}.
For example, letting y = 1, we find that
f̂(1) = (1/√8)(1 + ω⁴).
Since ω⁴ = −1, the terms cancel and we obtain f̂(1) = 0. The nonzero values of f̂(y) occur at multiples of 2, which is the frequency.
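A direct computation confirms this behavior, and also previews what happens when the period does not divide the length. This is a small sketch of our own; the function `dft` simply implements the transform with the 1/√N normalization used in the text.

```python
import cmath

def dft(seq):
    """Discrete Fourier transform with the 1/sqrt(N) normalization."""
    N = len(seq)
    w = cmath.exp(2j * cmath.pi / N)
    return [sum(seq[x] * w**(x * y) for x in range(N)) / N**0.5
            for y in range(N)]

# Period 4 divides the length 8: nonzero values only at multiples of 2.
f = dft([1, 0, 0, 0, 1, 0, 0, 0])
print([round(abs(v), 3) for v in f])

# "Almost" period 3 does not divide 8: peaks appear at 0, 3, and 5 instead.
g = dft([1, 0, 0, 1, 0, 0, 1, 0])
print([round(abs(v), 3) for v in g])
```

The first printout is nonzero exactly at 0, 2, 4, 6; the second shows the noisy peaks near multiples of 8/3 discussed below.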
Let’s consider another example: . The Fourier transform is
Here the nonzero values of the Fourier transform are again at the multiples of the frequency.
In general, if the period is a divisor of the length of the sequence, then all the nonzero values of the Fourier transform will occur at multiples of the frequency (however, a multiple of the frequency could still yield 0). See Exercise 2.
Suppose now that the period isn’t a divisor of the length. Let’s look at an example. Consider the sequence 1, 0, 0, 1, 0, 0, 1, 0. It has length 8 and almost has period 3 and frequency 3, but we stopped the sequence before it had a chance to complete the last period. In Figure 25.4, we graph the absolute value of its Fourier transform (these are real numbers, hence easier to graph than the complex values of the Fourier transform). Note that there are peaks at 0, 3, and 5. If we continued to larger values we would get peaks at 8, 11, 13, …. The peaks are spaced at an average distance of 8/3. Dividing the length of the sequence by the average distance yields a period of 8 ÷ (8/3) = 3, which agrees with our intuition.
The fact that there is a peak at 0 is not very surprising. The formula for the Fourier transform shows that the value at 0 is simply the sum of the elements in the sequence divided by the square root of the length of the sequence.
Let’s look at one more example: 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1. This sequence has 16 terms. Our intuition might say that the period is around 5 and the frequency is slightly more than 3. Figure 25.5 shows the graph of the absolute value of its Fourier transform. Again, the peaks are spaced around 3 apart, so we can say that the frequency is around 3. The period of the original sequence is therefore around 5, which agrees with our intuition.
In the first two examples, the period was a divisor of the length (namely, 8) of the sequence. We obtained nonzero values of the Fourier transform only at multiples of the frequency. In these last two examples, the period was not a divisor of the length (8 or 16) of the sequence. This introduced some “noise” into the situation. We had peaks at approximate multiples of the frequency and values close to 0 away from these peaks.
The conclusion is that the peaks of the Fourier transform occur approximately at multiples of the frequency, and the period is approximately the number of peaks. This will be useful in Shor’s algorithm.
Choose m so that n² ≤ 2^m < 2n². We start with m qubits, all in state 0:
|000…0⟩.
As in the previous section, by changing axes, we can transform the first bit to a linear combination of |0⟩ and |1⟩, which gives us
(1/√2)(|000…0⟩ + |100…0⟩).
We then successively do a similar transformation to the second bit, the third bit, up through the mth bit, to obtain the quantum state
(1/√(2^m)) Σ |b₁b₂…b_m⟩,
the sum running over all strings of 0s and 1s of length m. Thus all possible states of the qubits are superimposed in this sum. For simplicity of notation, we replace each string of 0s and 1s with its decimal equivalent, so we write
(1/√(2^m)) Σ_{x=0}^{2^m − 1} |x⟩.
Choose a random number a with 1 < a < n. We may assume gcd(a, n) = 1; otherwise, we have a factor of n. The quantum computer computes the function f(x) = a^x (mod n) for this quantum state to obtain
(1/√(2^m)) Σ_{x=0}^{2^m − 1} |x, a^x⟩
(for ease of notation, a^x is used to denote the least nonnegative residue of a^x mod n). This gives a list of all the values of a^x. However, so far we are not any better off than with a classical computer. If we measure the state of the system, we obtain a basic state |x, a^x⟩ for some randomly chosen x. We cannot even specify which x we want to use. Moreover, the system is forced into this state, obliterating all the other values of a^x that have been computed. Therefore, we do not want to measure the whole system. Instead, we measure the value of the second half. Each basic piece of the system is of the form |x, y⟩, where x is represented by m bits and y by enough bits to represent integers less than n (since y = a^x mod n < n). If we measure these last bits, we obtain some number y₀, and the whole system is forced into a combination of those states of the form |x, y₀⟩ with a^x ≡ y₀ (mod n):
c Σ_{a^x ≡ y₀} |x, y₀⟩,
where c is whatever factor is needed to make the vector have length 1 (in fact, 1/c is the square root of the number of terms in the sum).
At this point, it is probably worthwhile to have an example. Let n = 21. (This example might seem simple, but it is the largest that quantum computers using Shor’s algorithm can currently handle. Other algorithms are being developed that can go somewhat farther.) Since 441 ≤ 512 < 882, we have m = 9 and 2^m = 512. Let’s choose a = 2, so we compute the values of 2^x (mod 21) to obtain
(1/√512) Σ_{x=0}^{511} |x, 2^x⟩.
Suppose we measure the second part and obtain 2. This means we have extracted all the terms of the form |x, 2⟩, with 2^x ≡ 2 (mod 21), to obtain
c(|1, 2⟩ + |7, 2⟩ + |13, 2⟩ + ⋯ + |505, 2⟩).
For notational convenience, and since it will no longer be needed, we drop the second part to obtain
c(|1⟩ + |7⟩ + |13⟩ + ⋯ + |505⟩).
If we now measured this system, we would simply obtain a number x such that 2^x ≡ 2 (mod 21). This would not be useful.
Suppose we could take two measurements. Then we would have two numbers x₁ and x₂ with 2^{x₁} ≡ 2^{x₂} (mod 21). This would yield 2^{x₁ − x₂} ≡ 1 (mod 21). By the factorization method of Subsection 9.4.1, this would give us a good chance of being able to factor 21. However, we cannot take two independent measurements. The first measurement puts the system into the output state, so the second measurement would simply give the same answer as the first.
Not all is lost. Note that in our example, the numbers 1, 7, 13, …, 505 in our state are periodic with period 6. In general, the values of x with a^x ≡ y₀ (mod n) are periodic with period r, with a^r ≡ 1 (mod n). So suppose we are able to make a measurement that yields the period r. We then have a situation where a^r ≡ 1 (mod n), so we can hope to factor n by the method from Subsection 9.4.1 mentioned above.
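The quantity the algorithm is after, the period r, can of course be computed classically by brute force for numbers this small; that is what makes the n = 21 example easy to follow. The following sketch (our own helper, named `order`) just scans for the least r with a^r ≡ 1 (mod n), something a quantum computer finds without scanning.

```python
from math import gcd

# Classical brute-force version of the period that Shor's algorithm extracts:
# the least r > 0 with a^r congruent to 1 modulo n.
def order(a, n):
    assert gcd(a, n) == 1
    r, val = 1, a % n
    while val != 1:
        val = (val * a) % n
        r += 1
    return r

print(order(2, 21))   # the period of 1, 2, 4, 8, 16, 11, 1, ... which is 6
```

For a = 2 and n = 21 the powers cycle through 1, 2, 4, 8, 16, 11 and repeat, so the period is 6, matching the spacing of 1, 7, 13, …, 505 above.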
The quantum Fourier transform is exactly the tool we need. It measures frequencies, which can be used to find the period. If r happens to be a divisor of 2^m, then the frequencies we obtain are multiples of a fundamental frequency f₀ = 2^m/r, and r = 2^m/f₀. In general, r is not a divisor of 2^m, so there will be some dominant frequencies, and they will be approximate multiples of a fundamental frequency f₀ ≈ 2^m/r, with r ≈ 2^m/f₀. This will be seen in the analysis of our example and in Figure 25.6.
The quantum Fourier transform F is defined on a basic state |x⟩ (with 0 ≤ x < 2^m) by
F|x⟩ = (1/√(2^m)) Σ_{y=0}^{2^m − 1} ω^{xy} |y⟩,  ω = e^{2πi/2^m}.
It extends to a linear combination of states by linearity:
F(Σₓ aₓ|x⟩) = Σₓ aₓ F|x⟩.
We can therefore apply F to our quantum state.
In our example, we compute
F(c(|1⟩ + |7⟩ + |13⟩ + ⋯ + |505⟩))
and obtain a sum
Σ_{y=0}^{511} b_y |y⟩
for some numbers b_y.
The number b_y is given by
b_y = (c/√512)(ω^y + ω^{7y} + ω^{13y} + ⋯ + ω^{505y}),
which is, up to the factor c, the discrete Fourier transform of the sequence
0, 1, 0, 0, 0, 0, 0, 1, 0, …
(with 1s exactly at the positions 1, 7, 13, …, 505).
Therefore, the peaks of the graph of the absolute value of b_y should correspond to the frequency of the sequence, which should be around 512/6 ≈ 85.3. The graph in Figure 25.6 is a plot of |b_y|.
There are sharp peaks at y = 0, 85, 171, 256, 341, 427 (the ones at 0 and 256 do not show up on the graph since they are centered at one value; see below). These are the dominant frequencies mentioned previously. The values of |b_y| near the peak at y = 85 are noticeably larger than the values away from the peaks.
The behavior near y = 171, 341, and 427 is similar. At y = 0 and y = 256, the peak is concentrated at the single value of y, while all the nearby values of y have b_y = 0.
The peaks are approximately at multiples of the fundamental frequency 512/6 ≈ 85.3. Of course, we don’t really know this yet, since we haven’t made any measurements.
Now we measure the quantum state of this Fourier transform. Recall that if we start with a linear combination of states Σ_y b_y |y⟩ normalized such that Σ_y |b_y|² = 1, then the probability of obtaining |y⟩ is |b_y|². More generally, if we don’t assume Σ_y |b_y|² = 1, the probability is
|b_y|² / Σ_z |b_z|².
In our example, most of the probability is concentrated near the peaks, so if we sample the Fourier transform, there is a good chance that we obtain one of y = 85, 171, 341, 427. Let’s suppose this is the case; say we get y = 427. We know, or at least expect, that 427 is approximately a multiple of the fundamental frequency that we’re looking for:
427 ≈ j · (512/r)
for some j. Since 427/512 ≈ j/r, we divide to obtain
427/512 = 0.83398….
Note that 5/6 = 0.83333…. Since we must have r < 21, a reasonable guess is that j/r = 5/6, so r = 6 (see the following discussion of continued fractions).
In general, Shor showed that there is a high chance of obtaining a value of y with
|y/Q − c/r| ≤ 1/(2Q)
for some integer c. The method of continued fractions will find the unique (see Exercise 3) value of c/r with r < n satisfying this inequality.
In our example, we take c/r = 5/6 and check that |427/512 − 5/6| = 0.00065... ≤ 1/1024.
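The continued-fraction search for the best approximation with bounded denominator is built into Python's fractions module, so the recovery of r from the measured value can be imitated directly (this is an illustrative sketch; the text's computations are not tied to Python):

```python
from fractions import Fraction

y, Q, n = 427, 512, 21   # measurement, number of states, number to factor

# Best rational approximation to y/Q with denominator < n
approx = Fraction(y, Q).limit_denominator(n - 1)
print(approx)            # 5/6, so the guess is r = 6

r = approx.denominator
assert abs(y / Q - approx) <= 1 / (2 * Q)   # Shor's inequality holds
```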
We want to use the factorization method of Subsection 9.4.1 to factor 21. Recall that this method writes r = 2^k m with m odd, and then computes b_0 ≡ a^m (mod n). We then successively square to get b_{j+1} ≡ b_j^2 (mod n), until we reach 1. If b is the last b_j ≢ 1 (mod n), we compute gcd(b − 1, n) to get a factor (possibly trivial) of n.
In our example, we write 6 = 2 · 3 (a power of 2 times an odd number) and compute (in the notation of Subsection 9.4.1)
b_0 ≡ 2^3 ≡ 8 (mod 21),  b_1 ≡ 8^2 ≡ 1 (mod 21),
so we obtain gcd(8 − 1, 21) = 7.
In general, once we have a candidate for r, we check that a^r ≡ 1 (mod n). If not, we were unlucky, so we start over with a new a and form a new sequence of quantum states. If a^r ≡ 1 (mod n), then we use the factorization method from Subsection 9.4.1. If this fails to factor n, start over with a new a. It is very likely that, in a few attempts, a factorization of n will be found.
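The classical post-processing just described can be sketched in a few lines of Python (an illustration of the steps, under the assumption a = 2, r = 6, n = 21 from the example):

```python
from math import gcd

n, a, r = 21, 2, 6

assert pow(a, r, n) == 1   # check the candidate period

# Write r = 2^k * m with m odd, then square a^m repeatedly
m = r
while m % 2 == 0:
    m //= 2
b = pow(a, m, n)           # b_0 = 2^3 = 8 (mod 21)
last = b
while b != 1:
    last = b               # remember the last value that is not 1
    b = b * b % n
factor = gcd(last - 1, n)  # gcd(8 - 1, 21) = 7
print(factor, n // factor)
```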
We now say more about continued fractions. In Chapter 3, we outlined the method of continued fractions for finding rational numbers with small denominator that approximate real numbers. Let's apply the procedure to the real number 427/512. We have
427/512 = 1/(1 + 1/(5 + 1/(42 + 1/2))).
This yields the approximating rational numbers
0, 1, 5/6, 211/253, 427/512.
Since we know the period r in our example is less than 21, the best guess is the last denominator less than 21, namely r = 6.
In general, we compute the continued fraction expansion of y/Q, where y is the result of the measurement. Then we compute the approximations, as before. The last denominator less than n is the candidate for r.
The capabilities of quantum computers and quantum algorithms are of significant importance to economic and government institutions. Many secrets are protected by cryptographic protocols. The potential of quantum computing for breaking these secrets, as well as the potential of quantum cryptography for protecting future secrets, has caused this new research field to grow rapidly over the past few years.
Although the first full-scale quantum computer is probably many years off, and there are still many who are skeptical of its possibility, quantum cryptography has already succeeded in transmitting secure messages over distances of more than 100 km, and quantum computers have been built that can handle a (very) small number of qubits. Quantum computation and cryptography have already changed the manner in which computer scientists and engineers perceive the capabilities and limits of the computer. Quantum computing has rapidly become a popular interdisciplinary research area and promises to offer many exciting new results in the future.
Consider the sequence 2^x (mod 15), x = 0, 1, 2, ....
What is the period of this sequence?
Suppose you want to use Shor’s algorithm to factor What value of would you take?
Suppose the measurement in Shor’s algorithm yields . What value do you obtain for ? Does this agree with part (a)?
Use the value of r from part (c) to factor 15.
Let ω = e^(2πi/Q). Fix an integer u with 0 ≤ u < Q. Show that
Σ_{x=0}^{Q−1} ω^(ux) = Q if u = 0, and = 0 if u ≠ 0.
(Hint: Recognize the sum as a geometric sum and use the formula for its value.)
Suppose f(0), f(1), ..., f(Q−1) is a sequence of length Q such that f(x + d) = f(x) for all x, where d is a divisor of Q (and the indices are taken mod Q). Show that the Fourier transform b_y of this sequence is 0 whenever y is not a multiple of Q/d.
This shows that if the period of a sequence is a divisor of Q, then all the nonzero values of b_y occur at multiples of the fundamental frequency (namely, Q/d).
Suppose a/b and c/d are two distinct rational numbers, with 0 < b < n and 0 < d < n. Show that
|a/b − c/d| > 1/n^2.
Suppose, as in Shor’s algorithm, that we have
Show that .
These computer examples are written in Mathematica. If you have Mathematica available, you should try some of them on your computer. If Mathematica is not available, it is still possible to read the examples. They provide examples for several of the concepts of this book. For information on getting started with Mathematica, see Section A.1. To download a Mathematica notebook that contains these commands, go to
bit.ly/2u5R7dW
Download the Mathematica notebook crypto.nb that you find using the links starting at bit.ly/2u5R7dW
Open Mathematica, and then open crypto.nb using the menu options under File on the command bar at the top of the Mathematica window. (Perhaps this is done automatically when you download it; it depends on your computer settings.)
With crypto.nb in the foreground, click (left button) on Evaluation on the command bar. A menu will appear. Move the arrow down to the line Evaluate Notebook and click (left button). This evaluates the notebook and loads the necessary functions. Ignore any warning messages about spelling. They occur because a few functions have similar names.
Go to the command bar at the top and click on File. Move the arrow down to New and left click. Then left click on Notebook. A new notebook will appear on top of crypto.nb. However, all the commands of crypto.nb will still be working.
If you want to give the new notebook a name, use the File command and scroll down to Save As.... Then save under some name with a .nb at the end.
You are now ready to use Mathematica. If you want to try something easy, type and then press the Shift and Enter keys simultaneously. Or, if your keyboard has a number pad with Enter, probably on the right side of the keyboard, you can press that (without the Shift). The result 1031 should appear (it’s ).
Turn to the Computer Examples Section A.3. Try typing in some of the commands there. The outputs should be the same as those in the examples. Remember to press Shift Enter (or the numeric Enter) to make Mathematica evaluate an expression.
If you want to delete part of your notebook, simply move the arrow to the line at the right edge of the window and click the left button. The highlighted part can be deleted by clicking on Edit on the top command bar, then clicking on Cut on the menu that appears.
Save your notebook by clicking on File on the command bar, then clicking on Save on the menu that appears.
Print your notebook by clicking on File on the command bar, then clicking on Print on the menu that appears. (You will see the advantage of opening a new notebook in Step 4; if you didn’t open one, then all the commands in crypto.nb will also be printed.)
If you make a mistake in typing in a command and get an error message, you can edit the command and hit Shift Enter to try again. You don’t need to retype everything.
Look at the commands available through the command bar at the top. For example, Format then Style allows you to change the type font on any cell that has been highlighted (by clicking on its bar on the right side).
If you are looking for help or a command to do something, try the Help command. Note that the commands that are built into Mathematica always start with capital letters. The commands that are coming from crypto.nb start with small letters and will not be found via Help.
The following are some Mathematica commands that are used in the Computer Examples. The commands that start with capital letters, such as EulerPhi, are built into Mathematica. The ones that start with small letters, such as addell, have been written specially for this text and are in the Mathematica notebook available at
bit.ly/2u5R7dW
addell[{x,y}, {u,v}, b, c, n] finds the sum of the points (x,y) and (u,v) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n), where n is odd.
affinecrypt[txt,m,n] affine encryption of txt using mx + n (mod 26).
allshifts[txt] gives all 26 shifts of txt.
ChineseRemainder[{a,b,...},{m,n,...}] gives a solution to the simultaneous congruences x ≡ a (mod m), x ≡ b (mod n), ....
choose[txt,m,n] lists the characters in txt in positions congruent to n (mod m).
coinc[txt,n] the number of matches between txt and txt displaced by n.
corr[v] the dot product of the vector v with the 26 shifts of the alphabet frequency vector.
EulerPhi[n] computes φ(n) (don't try very large values of n).
ExtendedGCD[m,n] computes the gcd of m and n along with a solution of gcd = xm + yn.
FactorInteger[n] factors n.
frequency[txt] lists the number of occurrences of each letter a through z in txt.
GCD[m,n] is the gcd of m and n.
Inverse[M] finds the inverse of the matrix M.
lfsr[c,k,n] gives the sequence of n bits produced by the recurrence that has coefficients given by the vector c. The initial values of the bits are given by the vector k.
lfsrlength[v,n] tests the vector v of bits to see if it is generated by a recurrence of length at most n.
lfsrsolve[v,n] given a guess n for the length of the recurrence that generates the binary vector v, it computes the coefficients of the recurrence.
Max[v] is the largest element of the vector v.
Mod[a,n] is the value of a (mod n).
multell[{x,y}, m, b, c, n] computes m times the point (x,y) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n).
multsell[{x,y}, m, b, c, n] lists the first m multiples of the point (x,y) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n).
NextPrime[x] gives the smallest prime greater than x.
num2txt0[n] changes a number n to letters. The successive pairs of digits must each be at most 25; a is 00, z is 25.
num2txt1[n] changes a number n to letters. The successive pairs of digits must each be at most 26; space is 00, a is 01, z is 26.
PowerMod[a,b,n] computes a^b (mod n).
PrimitiveRoot[p] finds a primitive root for the prime p.
shift[txt,n] shifts txt by n.
txt2num0[txt] changes txt to numbers, with a = 00, ..., z = 25.
txt2num1[txt] changes txt to numbers, with space = 00, a = 01, ..., z = 26.
vigenere[txt,v] gives the Vigenère encryption of txt using the vector v.
vigvec[txt,m,n] gives the frequencies of the letters a through z in positions congruent to n (mod m).
A shift cipher was used to obtain the ciphertext kddkmu. Decrypt it by trying all possibilities.
In[1]:= allshifts["kddkmu"]
kddkmuleelnvmffmownggnpxohhoqypiiprzqjjqsarkkrtbsllsuctmmtvdunnuwevoovxfwppwygxqqxzhyrryaizsszbjattackbuubdlcvvcemdwwdfnexxegofyyfhpgzzgiqhaahjribbiksjccjlt
As you can see, attack is the only word that occurs on this list, so that was the plaintext.
Encrypt the plaintext message cleopatra using the affine function 7x + 8 (mod 26):
In[2]:=affinecrypt["cleopatra", 7, 8]
Out[2]=whkcjilxi
The ciphertext mzdvezc was encrypted using the affine function 5x + 12 (mod 26). Decrypt it.
SOLUTION
First, solve y ≡ 5x + 12 (mod 26) for x to obtain x ≡ 5^(-1)(y − 12) (mod 26). We need to find the inverse of 5 (mod 26):
In[3]:= PowerMod[5, -1, 26]
Out[3]= 21
Therefore, x ≡ 21(y − 12) ≡ 21y − 21 · 12 (mod 26). To change −21 · 12 to standard form:
In[4]:= Mod[-12*21, 26]
Out[4]= 8
Therefore, the decryption function is x ≡ 21y + 8 (mod 26). To decrypt the message:
In[5]:= affinecrypt["mzdvezc", 21, 8]
Out[5]= anthony
In case you were wondering, the plaintext was encrypted as follows:
In[6]:= affinecrypt["anthony", 5, 12]
Out[6]= mzdvezc
Here is the example of a Vigenère cipher from the text. Let’s see how to produce the data that was used in Section 2.3 to decrypt it. For convenience, we’ve already stored the ciphertext under the name vvhq.
In[7]:= vvhq
Out[7]=
vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljs hbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwl wgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfq twlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivw ikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile
Find the frequencies of the letters in the ciphertext:
In[8]:= frequency[vvhq]
Out[8]=
{{a, 8}, {b, 5}, {c, 12}, {d, 4}, {e, 15}, {f, 10}, {g, 27}, {h, 16}, {i, 13}, {j, 14}, {k, 17}, {l, 25}, {m, 7}, {n, 7}, {o, 5}, {p, 9}, {q, 14}, {r, 17}, {s, 24}, {t, 8}, {u, 12}, {v, 22}, {w, 22}, {x, 5}, {y, 8}, {z, 5}}
Let’s compute the coincidences for displacements of 1, 2, 3, 4, 5, 6:
In[9]:= coinc[vvhq, 1]
Out[9]= 14
In[10]:= coinc[vvhq, 2]
Out[10]= 14
In[11]:= coinc[vvhq, 3]
Out[11]= 16
In[12]:= coinc[vvhq, 4]
Out[12]= 14
In[13]:= coinc[vvhq, 5]
Out[13]= 24
In[14]:= coinc[vvhq, 6]
Out[14]= 12
We conclude that the key length is probably 5. Let’s look at the 1st, 6th, 11th, ... letters (namely, the letters in positions congruent to 1 mod 5):
In[15]:= choose[vvhq, 5, 1]
Out[15]= vvuttcccqgcunjtpjgkuqpknjkygkkgcjfqrkqjrqudukvpkvggjjivgjggpfncwuce
In[16]:= frequency[%]
Out[16]= {{a, 0}, {b, 0}, {c, 7}, {d, 1}, {e, 1}, {f, 2}, {g, 9}, {h, 0}, {i, 1}, {j, 8}, {k, 8}, {l, 0}, {m, 0}, {n, 3}, {o, 0}, {p, 4}, {q, 5}, {r, 2}, {s, 0}, {t, 3}, {u, 6}, {v, 5}, {w, 1}, {x, 0}, {y, 1}, {z, 0}}
To express this as a vector of frequencies:
In[17]:= vigvec[vvhq, 5, 1]
Out[17]= {0, 0, 0.104478, 0.0149254, 0.0149254, 0.0298507, 0.134328, 0, 0.0149254, 0.119403, 0.119403, 0, 0, 0.0447761, 0, 0.0597015, 0.0746269, 0.0298507, 0, 0.0447761, 0.0895522, 0.0746269, 0.0149254, 0, 0.0149254, 0}
The dot products of this vector with the displacements of the alphabet frequency vector are computed as follows:
In[18]:= corr[%]
Out[18]=
{0.0250149, 0.0391045, 0.0713284, 0.0388209, 0.0274925, 0.0380149, 0.051209, 0.0301493, 0.0324776, 0.0430299, 0.0337761, 0.0298507, 0.0342687, 0.0445672, 0.0355522, 0.0402239, 0.0434328, 0.0501791, 0.0391791, 0.0295821, 0.0326269, 0.0391791, 0.0365522, 0.0316119, 0.0488358, 0.0349403}
The third entry is the maximum, but sometimes the largest entry is hard to locate. One way to find it is
In[19]:= Max[%]
Out[19]= 0.0713284
Now it is easy to look through the list and find this number (it usually occurs only once). Since it occurs in the third position, the first shift for this Vigenère cipher is by 2, corresponding to the letter c. A procedure similar to the one just used (using vigvec[vvhq, 5,2],..., vigvec[vvhq,5,5]) shows that the other shifts are probably 14, 3, 4, 18. Let’s check that we have the correct key by decrypting.
In[20]:= vigenere[vvhq, -{2, 14, 3, 4, 18}]
Out[20]=
themethodusedforthepreparationandreadingofcodemessagesissimpleinthe extremeandatthesametimeimpossibleoftranslationunlessthekeyisknownth eeasewithwhichthekeymaybechangedisanotherpointinfavoroftheadoptiono fthiscodebythosedesiringtotransmitimportantmessageswithouttheslight estdangeroftheirmessagesbeingreadbypoliticalorbusinessrivalsetc
For the record, the plaintext was originally encrypted by the command
In[21]:= vigenere[%, {2, 14, 3, 4, 18}]
Out[21]=
vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljs hbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwl wgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfq twlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivw ikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile
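The displacement-and-count idea behind coinc, and the shifting behind vigenere, can be reproduced in a few lines of Python (a sketch, not the notebook's code; negative key entries decrypt, as with the minus sign in In[20]):

```python
def coinc(txt, n):
    """Number of positions where txt agrees with itself displaced by n."""
    return sum(a == b for a, b in zip(txt, txt[n:]))

def vigenere(txt, key):
    """Vigenere encryption of lowercase txt; negative key entries decrypt."""
    return "".join(chr((ord(c) - 97 + key[i % len(key)]) % 26 + 97)
                   for i, c in enumerate(txt))

msg = vigenere("themethodusedfor", [2, 14, 3, 4, 18])
print(coinc(msg, 5))   # coincidences at the key-length displacement
print(vigenere(msg, [-2, -14, -3, -4, -18]))   # recovers the plaintext
```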
Find gcd(23456, 987654).
In[1]:= GCD[23456, 987654]
Out[1]= 2
Solve 23456x + 987654y = 2 in integers x, y.
In[2]:= ExtendedGCD[23456, 987654]
Out[2]= {2, {-3158, 75}}
This means that 2 is the gcd and 23456 · (−3158) + 987654 · 75 = 2.
Compute 234 · 456 (mod 789).
In[3]:= Mod[234*456, 789]
Out[3]= 189
Compute 234567^876543 (mod 565656565).
In[4]:= PowerMod[234567, 876543, 565656565]
Out[4]= 473011223
Find the multiplicative inverse of 87878787 (mod 9191919191).
In[5]:= PowerMod[87878787, -1, 9191919191]
Out[5]= 7079995354
Solve 7654x ≡ 2389 (mod 65537).
SOLUTION
Here is one way. It corresponds to the method in Section 3.3. We calculate 7654^(-1) (mod 65537) and then multiply it by 2389:
In[6]:= PowerMod[7654, -1, 65537]
Out[6]= 54637
In[7]:= Mod[%*2389, 65537]
Out[7]= 43626
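The same congruence can be solved in Python, where the built-in pow accepts a negative exponent for modular inverses (Python 3.8+); this mirrors the two Mathematica steps:

```python
# Solve 7654*x ≡ 2389 (mod 65537)
inv = pow(7654, -1, 65537)    # the inverse of 7654, as in In[6]
x = inv * 2389 % 65537        # 43626
print(x)

assert 7654 * x % 65537 == 2389   # check the answer
```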
Find x with x ≡ 2 (mod 78), x ≡ 5 (mod 97), x ≡ 1 (mod 119).
SOLUTION
In[8]:= ChineseRemainder[{2, 5, 1}, {78, 97, 119}]
Out[8]= 647480
We can check the answer:
In[9]:= Mod[647480, {78, 97, 119}]
Out[9]= {2, 5, 1}
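A small Python version of the Chinese remainder computation, using the standard constructive formula (a sketch for experimentation):

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # Mi^(-1) taken mod m
    return x % M

x = crt([2, 5, 1], [78, 97, 119])
print(x)   # 647480, as in Out[8]
```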
Factor 123450 into primes.
In[10]:= FactorInteger[123450]
Out[10]= {{2, 1}, {3, 1}, {5, 2}, {823, 1}}
This means that 123450 = 2 · 3 · 5^2 · 823.
Evaluate φ(12345).
In[11]:= EulerPhi[12345]
Out[11]= 6576
Find a primitive root for the prime 65537.
In[12]:= PrimitiveRoot[65537]
Out[12]= 3
Therefore, 3 is a primitive root for 65537.
Find the inverse of the matrix
M = {{13, 12, 35}, {41, 53, 62}, {71, 68, 10}} (mod 999).
SOLUTION
First, invert the matrix without the mod:
In[13]:= Inverse[{{13, 12, 35}, {41, 53, 62}, {71, 68, 10}}]
Out[13]= {{3686/34139, -(2260/34139), 1111/34139}, {-(3992/34139), 2355/34139, -(629/34139)}, {975/34139, 32/34139, -(197/34139)}}
We need to clear the 34139 out of the denominator, so we evaluate 1/34139 mod 999:
In[14]:= PowerMod[34139, -1, 999]
Out[14]= 410
Since 410 · 34139 ≡ 1 (mod 999), we multiply the inverse matrix by 410 · 34139 and reduce mod 999 in order to remove the denominators without changing anything mod 999:
In[15]:= Mod[410*34139*%%, 999]
Out[15]= {{772, 472, 965}, {641, 516, 851}, {150, 133, 149}}
Therefore, the inverse matrix mod 999 is {{772, 472, 965}, {641, 516, 851}, {150, 133, 149}}.
In many cases, it is possible to determine by inspection the common denominator that must be removed. When this is not the case, note that the determinant of the original matrix will always work as a common denominator.
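The clear-the-denominators trick amounts to multiplying the adjugate matrix by the inverse of the determinant mod n. Here is a Python sketch for 3×3 matrices (illustrative code, not from the notebook):

```python
def inverse_mod(M, n):
    """Inverse of a 3x3 integer matrix mod n, via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    dinv = pow(det % n, -1, n)          # raises ValueError if gcd(det, n) > 1
    return [[x * dinv % n for x in row] for row in adj]

M = [[13, 12, 35], [41, 53, 62], [71, 68, 10]]
print(inverse_mod(M, 999))
# [[772, 472, 965], [641, 516, 851], [150, 133, 149]]
```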
Find a square root of 26951623672 mod the prime 98573007539.
SOLUTION
Since 98573007539 ≡ 3 (mod 4), we can use the Proposition of Section 3.9:
In[16]:= PowerMod[26951623672, (98573007539 + 1)/4, 98573007539]
Out[16]= 98338017685
The other square root is minus this one:
In[17]:= Mod[-%, 98573007539]
Out[17]= 234989854
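The exponent (p + 1)/4 works because p ≡ 3 (mod 4). A Python check of the same computation (a sketch, not the notebook's code):

```python
p = 98573007539
a = 26951623672

x = pow(a, (p + 1) // 4, p)     # one square root of a mod p
print(x, p - x)                 # the other square root is p - x

assert x * x % p == a           # confirm it really is a square root
```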
Let n = 9803 · 3491 = 34222273. Find all four solutions of x^2 ≡ 19101358 (mod 34222273).
SOLUTION
First, find a square root mod each of the two prime factors, both of which are congruent to 3 (mod 4):
In[18]:= PowerMod[19101358, (9803 + 1)/4, 9803]
Out[18]= 3998
In[19]:= PowerMod[19101358, (3491 + 1)/4, 3491]
Out[19]= 1318
Therefore, the square roots are congruent to ±3998 (mod 9803) and are congruent to ±1318 (mod 3491). There are four ways to combine these using the Chinese remainder theorem:
In[20]:= ChineseRemainder[ {3998, 1318 }, {9803, 3491 }]
Out[20]= 43210
In[21]:= ChineseRemainder[ {-3998, 1318 }, {9803, 3491 }]
Out[21]= 8397173
In[22]:= ChineseRemainder[ {3998, -1318 }, {9803, 3491 }]
Out[22]= 25825100
In[23]:= ChineseRemainder[ {-3998, -1318}, {9803, 3491}]
Out[23]= 34179063
These are the four desired square roots.
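Combining the ± choices with the Chinese remainder theorem can be scripted as follows (a Python sketch using the two-modulus constructive formula):

```python
n1, n2 = 9803, 3491
n = n1 * n2                      # 34222273
a = 19101358

r1 = pow(a, (n1 + 1) // 4, n1)   # a square root mod 9803 (both primes are 3 mod 4)
r2 = pow(a, (n2 + 1) // 4, n2)   # a square root mod 3491

def crt2(x1, x2):
    # Solve x ≡ x1 (mod n1), x ≡ x2 (mod n2)
    return (x1 * n2 * pow(n2, -1, n1) + x2 * n1 * pow(n1, -1, n2)) % n

roots = sorted(crt2(s1 * r1, s2 * r2) for s1 in (1, -1) for s2 in (1, -1))
print(roots)   # the four square roots: 43210, 8397173, 25825100, 34179063
```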
Compute the first 50 terms of the recurrence
x_{n+5} ≡ x_n + x_{n+2} (mod 2).
The initial values are 0, 1, 0, 0, 0.
SOLUTION
The vector of coefficients is {1, 0, 1, 0, 0} and the initial values are given by the vector {0, 1, 0, 0, 0}. Type
In[1]:= lfsr[{1, 0, 1, 0, 0}, {0, 1, 0, 0, 0}, 50]
Out[1]= {0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1}
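The recurrence is easy to imitate in Python (a sketch of what lfsr computes, with the same argument order as the Mathematica function):

```python
def lfsr(c, k, n):
    """First n bits of the LFSR with coefficient vector c and initial bits k."""
    bits = list(k)
    while len(bits) < n:
        # next bit = c[0]*oldest + c[1]*next + ... (mod 2) over the last len(c) bits
        bits.append(sum(ci * bi for ci, bi in zip(c, bits[-len(c):])) % 2)
    return bits[:n]

out = lfsr([1, 0, 1, 0, 0], [0, 1, 0, 0, 0], 50)
print(out)   # matches Out[1]
```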
Suppose the first 20 terms of an LFSR sequence are 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1. Find a recurrence that generates this sequence.
SOLUTION
First, we find the length of the recurrence. The command lfsrlength[v, n] calculates the determinants mod 2 of the first matrices that appear in the procedure in Section 5.2:
In[2]:=
lfsrlength[{1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1}, 10]
{1, 1}
{2, 1}
{3, 0}
{4, 1}
{5, 0}
{6, 1}
{7, 0}
{8, 0}
{9, 0}
{10, 0}
The last nonzero determinant is the sixth one, so we guess that the recurrence has length 6. To find the coefficients:
In[3]:= lfsrsolve[{1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1}, 6]
Out[3]= {1, 0, 1, 1, 1, 0}
This gives the recurrence as
x_{n+6} ≡ x_n + x_{n+2} + x_{n+3} + x_{n+4} (mod 2).
The ciphertext 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0 was produced by adding the output of an LFSR onto the plaintext mod 2 (i.e., XOR the plaintext with the LFSR output). Suppose you know that the plaintext starts 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0. Find the rest of the plaintext.
SOLUTION
XOR the ciphertext with the known part of the plaintext to obtain the beginning of the LFSR output:
In[4]:= Mod[{1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0} + {0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1}, 2]
Out[4]= {1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1}
This is the beginning of the LFSR output. Now let’s find the length of the recurrence:
In[5]:= lfsrlength[%, 8]
{1, 1}
{2, 0}
{3, 1}
{4, 0}
{5, 1}
{6, 0}
{7, 0}
{8, 0}
We guess the length is 5. To find the coefficients of the recurrence:
In[6]:= lfsrsolve[%%, 5]
Out[6]= {1, 1, 0, 0, 1}
Now we can generate the full output of the LFSR using the coefficients we just found plus the first five terms of the LFSR output:
In[7]:= lfsr[{1, 1, 0, 0, 1}, {1, 0, 0, 1, 0}, 40]
Out[7]={1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0}
When we XOR the LFSR output with the ciphertext, we get back the plaintext:
In[8]:= Mod[% + {0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0}, 2]
Out[8]= {1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0}
This is the plaintext.
The ciphertext 22, 09, 00, 12, 03, 01, 10, 03, 04, 08, 01, 17
was encrypted using a Hill cipher with matrix
{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}.
Decrypt it.
SOLUTION
A matrix is entered as {{row 1}, {row 2}, {row 3}}. Type M.N to multiply matrices M and N. Type v.M to multiply a vector v on the right by a matrix M.
First, we need to invert the matrix mod 26:
In[1]:= Inverse[{{ 1,2,3},{ 4,5,6},{7,8,10}}]
Out[1]= {{-(2/3), -(4/3), 1}, {-(2/3), 11/3, -2}, {1, -2, 1}}
Since we are working mod 26, we can’t stop with numbers like 2/3. We need to get rid of the denominators and reduce mod 26. To do so, we multiply by 3 to extract the numerators of the fractions, then multiply by the inverse of 3 mod 26 to put the “denominators” back in (see Section 3.3):
In[2]:= %*3
Out[2]= {{-2, -4, 3}, {-2, 11, -6}, {3, -6, 3}}
In[3]:= Mod[PowerMod[3, -1, 26]*%, 26]
Out[3]= {{8,16,1}, {8,21,24}, {1,24,1}}
This is the inverse of the matrix mod 26. We can check this as follows:
In[4]:= Mod[%.{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}, 26]
Out[4]= {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
To decrypt, we break the ciphertext into blocks of three numbers and multiply each block on the right by the inverse matrix we just calculated:
In[5]:= Mod[{22, 09, 00}.%%, 26]
Out[5]= {14, 21, 4}
In[6]:= Mod[{12, 03, 01}.%%%, 26]
Out[6]= {17, 19, 7}
In[7]:= Mod[{10, 03, 04}.%%%%, 26]
Out[7]= {4, 7, 8}
In[8]:= Mod[{08, 01, 17}.%%%%%, 26]
Out[8]= {11, 11, 23}
Therefore, the plaintext is 14, 21, 4, 17, 19, 7, 4, 7, 8, 11, 11, 23. This can be changed back to letters:
In[9]:= num2txt0[142104171907040708111123]
Out[9]= overthehillx
Note that the final x was appended to the plaintext in order to complete a block of three letters.
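The whole decryption can be reproduced in Python (an illustrative script; the 3×3 inverse mod 26 is computed from the adjugate and the inverse of the determinant):

```python
def inv3_mod(M, n):
    """Inverse of a 3x3 integer matrix mod n, via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    dinv = pow(det % n, -1, n)
    return [[x * dinv % n for x in row] for row in adj]

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
Minv = inv3_mod(M, 26)        # [[8, 16, 1], [8, 21, 24], [1, 24, 1]]

cipher = [22, 9, 0, 12, 3, 1, 10, 3, 4, 8, 1, 17]
plain = []
for j in range(0, len(cipher), 3):
    v = cipher[j:j + 3]       # each block is a row vector, multiplied on the right
    plain += [sum(v[k] * Minv[k][t] for k in range(3)) % 26 for t in range(3)]

print("".join(chr(x + 97) for x in plain))   # overthehillx
```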
Suppose you need to find a large random prime of 50 digits. Here is one way. The function NextPrime[x] finds the next prime greater than x. The function Random[Integer,{a,b}] gives a random integer between a and b. Combining these, we can find a 50-digit prime:
In[1]:= NextPrime[Random[Integer, {10^49, 10^50}]]
Out[1]= 73050570031667109175215303340488313456708913284291
If we repeat this procedure, we should get another prime:
In[2]:= NextPrime[Random[Integer, {10^49, 10^50}]]
Out[2]= 97476407694931303255724326040586144145341054568331
Suppose you want to change the text hellohowareyou to numbers:
In[3]:= txt2num1["hellohowareyou"]
Out[3]= 805121215081523011805251521
Note that we are now using a = 01, b = 02, ..., since otherwise a's at the beginnings of messages would disappear. (A more efficient procedure would be to work in base 27, so the numerical form of the message would use fewer digits.)
Now suppose you want to change it back to letters:
In[4]:= num2txt1[805121215081523011805251521]
Out[4]= hellohowareyou
Encrypt the message hi using RSA with n = 823091 and e = 17.
SOLUTION
First, change the message to numbers:
In[5]:= txt2num1["hi"]
Out[5]= 809
Now, raise it to the 17th power mod 823091:
In[6]:= PowerMod[%, 17, 823091]
Out[6]= 596912
Decrypt the ciphertext in the previous problem.
SOLUTION
First, we need to find the decryption exponent d. To do this, we need to find φ(823091). One way is as follows:
In[7]:= EulerPhi[823091]
Out[7]= 821184
Another way is to factor 823091 as p · q and then compute (p − 1)(q − 1):
In[8]:= FactorInteger[823091]
Out[8]= { {659, 1 }, {1249, 1 } }
In[9]:= 658*1248
Out[9]= 821184
Since de ≡ 1 (mod φ(n)), we compute the following (note that we are finding the inverse of 17 mod 821184, not mod 823091):
In[10]:= PowerMod[17, -1, 821184]
Out[10]= 48305
Therefore, d = 48305. To decrypt, raise the ciphertext to the 48305th power:
In[11]:= PowerMod[596912, 48305, 823091]
Out[11]= 809
Finally, change back to letters:
In[12]:= num2txt1[809]
Out[12]= hi
Encrypt hellohowareyou using RSA with n = 823091 and e = 17.
SOLUTION
First, change the plaintext to numbers:
In[13]:= txt2num1["hellohowareyou"]
Out[13]= 805121215081523011805251521
Suppose we simply raised this to the 17th power mod 823091:
In[14]:= PowerMod[%, 17, 823091]
Out[14]= 447613
If we decrypt (we know d = 48305 from Example 25), we obtain
In[15]:= PowerMod[%, 48305, 823091]
Out[15]= 628883
This is not the original plaintext. The reason is that the plaintext is larger than 823091, so we have obtained the plaintext reduced mod 823091:
In[16]:= Mod[805121215081523011805251521, 823091]
Out[16]= 628883
We need to break the plaintext into blocks, each less than n = 823091. In our case, we use three letters (six digits) at a time:
In[17]:= PowerMod[80512, 17, 823091]
Out[17]= 757396
In[18]:= PowerMod[121508, 17, 823091]
Out[18]= 164513
In[19]:= PowerMod[152301, 17, 823091]
Out[19]= 121217
In[20]:= PowerMod[180525, 17, 823091]
Out[20]= 594220
In[21]:= PowerMod[1521, 17, 823091]
Out[21]= 442163
The ciphertext is therefore 757396164513121217594220442163. Note that there is no reason to change this back to letters. In fact, it doesn’t correspond to any text with letters.
Decrypt each block individually:
In[22]:= PowerMod[757396, 48305, 823091]
Out[22]= 80512
In[23]:= PowerMod[164513, 48305, 823091]
Out[23]= 121508
Etc.
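The block-by-block computation can be mirrored in Python (a sketch; the two-digit letter encoding with a = 01 imitates txt2num1):

```python
n, e, d = 823091, 17, 48305

def txt2blocks(txt, size=3):
    """Encode txt with a = 01, ..., z = 26 and split into size-letter blocks."""
    digits = "".join(f"{ord(ch) - 96:02d}" for ch in txt)
    return [int(digits[j:j + 2 * size]) for j in range(0, len(digits), 2 * size)]

blocks = txt2blocks("hellohowareyou")
print(blocks)            # [80512, 121508, 152301, 180525, 1521]

cipher = [pow(b, e, n) for b in blocks]       # encrypt each block
decrypted = [pow(c, d, n) for c in cipher]    # decrypt each block
assert decrypted == blocks   # each block is below n, so decryption recovers it
```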
We’ll now do some examples with large numbers, namely the numbers in the RSA Challenge discussed in Section 9.5. These are stored under the names rsan, rsae, rsap, rsaq:
In[24]:= rsan
Out[24]=
114381625757888867669235779976146612010218296721242362562561842935 706935245733897830597123563958705058989075147599290026879543541
In[25]:= rsae
Out[25]= 9007
Encrypt each of the messages b, ba, bar, bard using rsan and rsae.
In[26]:= PowerMod[txt2num1["b"], rsae, rsan]
Out[26]=
709467584676126685983701649915507861828763310606852354105647041144 86782261716497200122155332348462014053287987580899263765142534
In[27]:= PowerMod[txt2num1["ba"], rsae, rsan]
Out[27]=
350451306089751003250117094498719542737882047539485930603136976982 27621759806027962270538031565564773352033671782261305796158951
In[28]:= PowerMod[txt2num1["bar"], rsae, rsan]
Out[28]=
448145128638551010760045308594921093424295316066074090703605434080 00843645986880405953102818312822586362580298784441151922606424
In[29]:= PowerMod[txt2num1["bard"], rsae, rsan]
Out[29]=
242380777851116664232028625120903173934852129590562707831349916142 56054323297179804928958073445752663026449873986877989329909498
Observe that the ciphertexts are all the same length. There seems to be no easy way to determine the length of the corresponding plaintext.
Using the factorization rsan = rsap · rsaq, find the decryption exponent for the RSA Challenge, and decrypt the ciphertext (see Section 9.5).
SOLUTION
First we find the decryption exponent:
In[30]:=rsad=PowerMod[rsae,-1,(rsap-1)*(rsaq-1)];
Note that we use the final semicolon to avoid printing out the value. If you want to see the value of rsad, see Section 9.5, or don’t use the semicolon. To decrypt the ciphertext, which is stored as rsaci, and change to letters:
In[31]:=num2txt1[PowerMod[rsaci, rsad, rsan]]
Out[31]= the magic words are squeamish ossifrage
Encrypt the message rsaencryptsmessageswell using rsan and rsae.
In[32]:= PowerMod[txt2num1["rsaencryptsmessageswell"], rsae, rsan]
Out[32]=
946394203490022593163058235392494964146409699340017097214043524182 71950654254365584906013966328817753539283112653197553130781884
Decrypt the preceding ciphertext.
SOLUTION
Fortunately, we know the decryption exponent rsad. Therefore, we compute
In[33]:= PowerMod[%, rsad, rsan]
Out[33]= 1819010514031825162019130519190107051923051212
In[34]:= num2txt1[%]
Out[34]= rsaencryptsmessageswell
Suppose we lose the final 4 of the ciphertext in transmission. Let’s try to decrypt what’s left (subtracting 4 and dividing by 10 is a mathematical way to remove the 4):
In[35]:= PowerMod[(%%% - 4)/10, rsad, rsan]
Out[35]=
479529991731959886649023526295254864091136338943756298468549079705 88412300373487969657794254117158956921267912628461494475682806
If we try to change this to letters, we get a long error message. A small error in the plaintext completely changes the decrypted message and usually produces garbage.
Suppose we are told that n = 11313771275590312567 is the product of two primes and that φ(n) = 11313771187608744400. Factor n.
SOLUTION
We know (see Section 9.1) that p and q are the roots of X^2 − (n − φ(n) + 1)X + n = 0. Therefore, we compute
In[36]:= Roots[X^2 - (11313771275590312567 - 11313771187608744400 + 1)*X + 11313771275590312567 == 0, X]
Out[36]=
Therefore, we obtain the two prime factors of n. We also could have used the quadratic formula to find the roots.
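The quadratic can also be solved with exact integer arithmetic in Python (an illustrative sketch; math.isqrt keeps everything exact):

```python
from math import isqrt

def factor_from_phi(n, phi):
    """Recover p, q from n = p*q and phi = (p-1)*(q-1)."""
    s = n - phi + 1                # p + q
    disc = s * s - 4 * n           # (p - q)^2
    t = isqrt(disc)
    p, q = (s - t) // 2, (s + t) // 2
    return (p, q) if p * q == n else None

print(factor_from_phi(11313771275590312567, 11313771187608744400))
```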
Suppose we know rsae and rsad. Use these to factor rsan.
SOLUTION
We use the factorization method from Section 9.4. First write rsae · rsad − 1 = 2^s · m with m odd. One way to do this is first to compute rsae · rsad − 1, then keep dividing by 2 until we get an odd number:
In[37]:= rsae*rsad - 1
Out[37]=
961034419617782266156919023359583834109854129051878330250644604041 155985575087352659156174898557342995131594680431086921245830097664
In[38]:= %/2
Out[38]=
480517209808891133078459511679791917054927064525939165125322302020 577992787543676329578087449278671497565797340215543460622915048832
In[39]:= %/2
Out[39]=
240258604904445566539229755839895958527463532262969582562661151010 288996393771838164789043724639335748782898670107771730311457524416
We continue this way for six more steps until we get
Out[45]=
375404070163196197717546493499837435199161769160889972754158048453 5765568652684971324828808197489621074732791720433933286116523819
This number is m. Now choose a random integer a. Hoping to be lucky, we choose 13. As in the factorization method, we compute 13^m (mod rsan):
In[46]:= PowerMod[13, %, rsan]
Out[46]=
275743685070065305922434948688471611984230957073078056905698396470 30183109839862370800529338092984795490192643587960859870551239
Since this is not 1, we successively square it until we get 1:
In[47]:= PowerMod[%, 2, rsan]
Out[47]=
483189603219285155801384764187230345541040990699408462254947027766 54996412582955636035266156108686431194298574075854037512277292
In[48]:= PowerMod[%, 2, rsan]
Out[48]=
781728141548773565791419280587540000219487870564838209179306251152 15181839742056013275521913487560944732073516487722273875579363
In[49]:= PowerMod[%, 2, rsan]
Out[49]=
428361912025087287421992990405829002029762229160177671675518702165 09444518239462186379470569442055101392992293082259601738228702
In[50]:= PowerMod[%, 2, rsan]
Out[50]= 1
Since the last number before the 1 was not ±1 (mod rsan), we have an example of x^2 ≡ 1 (mod rsan) with x ≢ ±1. Therefore, gcd(x − 1, rsan) is a nontrivial factor of rsan:
In[51]:= GCD[%% - 1, rsan]
Out[51]=
32769132993266709549961988190834461413177642967992942539798288533
This is rsaq. The other factor is obtained by computing rsan/rsaq:
In[52]:= rsan/%
Out[52]=
3490529510847650949147849619903898133417764638493387843990820577
This is rsap.
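The same e, d attack works on the small RSA example from earlier in this section (n = 823091, e = 17, d = 48305). A Python sketch that tries small bases until one of them succeeds:

```python
from math import gcd

n, e, d = 823091, 17, 48305

m = e * d - 1                  # a multiple of phi(n)
while m % 2 == 0:
    m //= 2                    # write e*d - 1 = 2^s * m with m odd

factor = None
for a in range(2, 50):
    if gcd(a, n) > 1:
        factor = gcd(a, n)     # extremely lucky: a shares a factor with n
        break
    b = pow(a, m, n)
    if b in (1, n - 1):
        continue               # this base tells us nothing; try another
    while b != 1:
        prev = b               # last value before the 1
        b = b * b % n
    if prev != n - 1:
        factor = gcd(prev - 1, n)
        break

print(factor, n // factor)     # 659 and 1249 in some order
```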
Suppose you know that
150883475569451^2 ≡ 16887570532858^2 (mod 205611444308117).
Factor 205611444308117.
SOLUTION
We use the Basic Principle of Section 9.4.
In[53]:= GCD[150883475569451-16887570532858,205611444308117]
Out[53]= 23495881
This gives one factor. The other is
In[54]:= 205611444308117/%
Out[54]= 8750957
We can check that these factors are actually primes, so we can’t factor any further:
In[55]:= PrimeQ[%%]
Out[55]= True
In[56]:= PrimeQ[%%]
Out[56]= True
Factor 376875575426394855599989992897873239 by the p − 1 method.
SOLUTION
Let’s choose our bound as , and let’s take , so we compute :
In[57]:= PowerMod[2,Factorial[100],37687557542639485559998999289787 3239]
Out[57]= 369676678301956331939422106251199512
Then we compute the gcd of 2^(100!) − 1 and the number we want to factor:
In[58]:= GCD[% - 1, 376875575426394855599989992897873239]
Out[58]= 430553161739796481
This is a factor p. The other factor q is
In[59]:= 376875575426394855599989992897873239/%
Out[59]= 875328783798732119
Let’s see why this worked. The factorizations of and are
In[60]:= FactorInteger[430553161739796481 - 1]
Out[60]= {{2, 18 }, {3, 7 }, {5, 1 }, {7, 4 }, {11, 3 }, {47, 1 }}
In[61]:= FactorInteger[875328783798732119 - 1]
Out[61]= {{2, 1 }, {61, 1 }, {20357, 1 }, {39301, 1 }, {8967967, 1 }}
We see that 100! is a multiple of p − 1, since p − 1 has only small prime factors, so 2^(100!) ≡ 1 (mod p). However, 100! is not a multiple of q − 1, which has the large prime factors 20357, 39301, and 8967967, so it is likely that 2^(100!) ≢ 1 (mod q). Therefore, both 2^(100!) − 1 and the number being factored have p as a factor, but only the latter has q as a factor. It follows that the gcd is p.
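Here is the p − 1 method on a small made-up example in Python (an illustration: p = 101 has p − 1 = 2^2 · 5^2, which divides 10!, while q = 107 has q − 1 = 2 · 53 with the large prime factor 53):

```python
from math import gcd, factorial

p, q = 101, 107
n = p * q                      # 10807

B = 10
x = pow(2, factorial(B), n)    # 2^(10!) mod n
g = gcd(x - 1, n)
print(g, n // g)               # finds p = 101, since (p-1) | 10! but (q-1) does not
```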
Let’s solve the discrete log problem by the Baby Step-Giant Step method of Subsection 10.2.2. We take since and we form two lists. The first is for :
In[1]:= Do[Print[n, " ", PowerMod[2, n, 131]], {n, 0, 11}]
Out[1]= 0 1
1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 125
9 119
10 107
11 83
The second is 71 · 2^(-12n) (mod 131) for 0 ≤ n ≤ 11:
In[2]:= Do[Print[n, " ", Mod[71*PowerMod[2, -12*n, 131], 131]], {n, 0, 11}]
Out[2]= 0 71
1 17
2 124
3 26
4 128
5 86
6 111
7 93
8 85
9 96
10 130
11 116
The number 128 is on both lists, so we see that 2^7 ≡ 71 · 2^(-48) (mod 131). Therefore,
71 ≡ 2^(7+48) ≡ 2^55 (mod 131).
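A Python version of the baby step-giant step search (a sketch using the same N = 12; pow with a negative exponent and modulus requires Python 3.8+):

```python
def bsgs(base, target, p, N):
    """Find x with base^x ≡ target (mod p), searching x = j + N*i."""
    baby = {pow(base, j, p): j for j in range(N)}   # baby steps
    step = pow(base, -N, p)                         # base^(-N) mod p
    val = target
    for i in range(N):
        if val in baby:                             # giant step hit
            return baby[val] + N * i
        val = val * step % p
    return None

x = bsgs(2, 71, 131, 12)
print(x)   # 55
```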
Suppose there are 23 people in a room. What is the probability that at least two have the same birthday?
SOLUTION
The probability that no two have the same birthday is
∏_{i=1}^{22} (1 − i/365)
(note that the product stops at 22, not 23). Subtracting from 1 gives the probability that at least two have the same birthday:
In[1]:= 1 - Product[1. - i/365, {i, 22}]
Out[1]= 0.507297
Note that we used 1. in the product instead of 1 without the decimal point. If we had omitted the decimal point, the product would have been evaluated as a rational number (try it, you’ll see).
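The same product in Python (floating point throughout, playing the role of the decimal point in Mathematica):

```python
prob_no_match = 1.0
for i in range(1, 23):            # the product stops at i = 22
    prob_no_match *= 1 - i / 365

print(1 - prob_no_match)          # about 0.507297
```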
Suppose a lazy phone company employee assigns telephone numbers by choosing random seven-digit numbers. In a town with 10,000 phones, what is the probability that two people receive the same number?
In[2]:= 1 - Product[1. - i/10.^7, {i, 9999}]
Out[2]= 0.99327
Note that the number of phones is about three times the square root of the number of possibilities. This means that we expect the probability to be high, which it is. From Section 12.1, we have the estimate that if there are around 1.177·sqrt(10^7) ≈ 3722 phones, there should be a 50% chance of a match. Let's see how accurate this is:
In[3]:= 1 - Product[1. - i/10.^7, {i, 3722}]
Out[3]= 0.499895
Suppose we have a (5, 8) Shamir secret sharing scheme. Everything is mod the prime p = 987541. Five of the shares are (9853, 853), (4421, 4387), (6543, 1234), (93293, 78428), (12398, 7563).
Find the secret.
SOLUTION
One way: First, find the Lagrange interpolating polynomial through the five points:
In[1]:= InterpolatingPolynomial[ { {9853, 853 }, {4421, 4387 }, {6543, 1234 }, {93293, 78428 }, {12398, 7563 } }, x]
Out[1]=
Now evaluate at x = 0 to find the constant term (use %/.x->0 to substitute x = 0 into the previous output):
In[2]:= % /. x -> 0
Out[2]=
We need to change this to an integer mod 987541, so we find the multiplicative inverse of the denominator:
In[3]:= PowerMod[Denominator[%], -1, 987541]
Out[3]= 509495
Now, multiply this inverse times the numerator to get the desired integer:
In[4]:= Mod[Numerator[%%]*%, 987541]
Out[4]= 678987
Therefore, 678987 is the secret.
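Instead of interpolating over the rationals and then reducing, one can do the Lagrange interpolation directly mod p and evaluate at x = 0. A Python sketch (not the book's Mathematica code):

```python
def recover_secret(shares, p):
    """Evaluate the Lagrange interpolating polynomial at x = 0 over
    GF(p); the constant term of the polynomial is the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p        # factor (0 - xj)
                den = den * (xi - xj) % p    # factor (xi - xj)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = [(9853, 853), (4421, 4387), (6543, 1234),
          (93293, 78428), (12398, 7563)]
print(recover_secret(shares, 987541))  # 678987
```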
Here is a game you can play. It is essentially the simplified version of poker over the telephone from Section 18.2. There are five cards: ten, jack, queen, king, ace. They are shuffled and disguised by raising their numbers to a random exponent k mod the prime 24691313099. You are supposed to guess which one is the ace. To start, pick a random exponent. We use the semicolon after khide so that we cannot cheat and see what value of k is being used.
In[1]:= k = khide;
Now, shuffle the disguised cards (their numbers are raised to the kth power mod the prime and then randomly permuted):
In[2]:= shuffle
Out[2]= {14001090567, 16098641856, 23340023892, 20919427041, 7768690848}
These are the five cards (yours will differ from these because the value of k and the random shuffle will be different). None looks like the ace; that's because their numbers have been raised to powers mod the prime. Make a guess anyway. Let's see if you're correct.
In[3]:= reveal[%]
Out[3]= {ten, ace, queen, jack, king}
Let’s play again:
In[4]:= k = khide;
In[5]:= shuffle
Out[5]= {13015921305, 14788966861, 23855418969, 22566749952, 8361552666}
Make your guess (note that the numbers are different because a different random exponent was used). Were you lucky?
In[6]:= reveal[%]
Out[6]= {ten, queen, ace, king, jack}
Perhaps you need some help. Let’s play one more time:
In[7]:= k = khide;
In[8]:= shuffle
Out[8]= {13471751030, 20108480083, 8636729758, 14735216549, 11884022059}
We now ask for advice:
In[9]:= advise[%]
Out[9]= 3
We are advised that the third card is the ace. Let’s see (note that %% is used to refer to the next to last output):
In[10]:= reveal[%%]
Out[10]= {jack, ten, ace, queen, king}
How does this work? Read the part on "How to Cheat" in Section 18.2. Note that if we raise the numbers for the cards to the power (24691313099 - 1)/2, we get
In[11]:= PowerMod[{200514, 10010311, 1721050514, 11091407, 10305}, (24691313099 - 1)/ 2, 24691313099]
Out[11]= {1, 1, 1, 1, 24691313098}
Therefore, only the ace is a quadratic nonresidue mod 24691313099.
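The last step is just Euler's criterion, which is easy to replicate in Python:

```python
# Euler's criterion: for an odd prime p, a^((p-1)/2) ≡ 1 (mod p)
# when a is a square mod p, and ≡ p - 1 (i.e., -1) when it is not.
p = 24691313099
cards = [200514, 10010311, 1721050514, 11091407, 10305]
print([pow(a, (p - 1) // 2, p) for a in cards])
# [1, 1, 1, 1, 24691313098]
```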
All of the elliptic curves we work with in this chapter are elliptic curves mod p. However, it is helpful to use the graphs of elliptic curves with real numbers in order to visualize what is happening with the addition law, for example, even though such pictures do not exist mod p. Therefore, let's graph the elliptic curve y^2 = x(x - 1)(x + 1). We'll specify that x runs from -1 to 3 and y from -5 to 5:
In[1]:= ContourPlot[y^2 == x*(x - 1)*(x + 1), {x, -1, 3}, {y, -5, 5}]
Add the points (1, 3) and (3, 5) on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29).
In[2]:= addell[ {1, 3 }, {3, 5 }, 24, 13, 29]
Out[2]= {26, 1 }
You can check that the point (26, 1) is on the curve: 1^2 ≡ 26^3 + 24·26 + 13 (mod 29).
Add (1, 3) to the point at infinity on the curve of the previous example.
In[3]:= addell[ {1, 3 }, {”infinity”, ”infinity” }, 24, 13, 29]
Out[3]= {1, 3 }
As expected, adding the point at infinity to a point P returns the point P.
Let P = (1, 3) be a point on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29). Find 7P.
In[4]:= multell[ {1, 3 }, 7, 24, 13, 29]
Out[4]= {15, 6 }
Find kP for 1 ≤ k ≤ 40 and P = (1, 3) on the curve of the previous example.
In[5]:= multsell[ {1, 3 }, 40, 24, 13, 29]
Out[5]= {1,{1,3},2,{11,10},3,{23,28},4,{0,10},5,{19,7},6,{18,19},7, {15,6},8,{20,24},9,{4,12},10,{4,17},11,{20,5},12,{15,23},13,{18,10}, 14,{19,22},15,{0,19}, 16,{23,1},17,{11,19},18,{1,26},19, {infinity,infinity},20,{1,3},21,{11,10}, 22,{23,28},23,{0,10}, 24,{19,7}, 25,{18,19},26,{15,6},27,{20,24},28,{4,12},29,{4,17}, 30,{20,5},31,{15,23},32,{18,10},33,{19,22}, 34,{0,19},35,{23,1},36, {11,19},37,{1,26}, 38,{infinity,infinity},39,{1,3},40,{11,10}}
Notice how the points repeat after every 19 multiples.
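The functions addell and multell come with the book's notebook; a self-contained Python sketch of the same addition law (for a prime modulus) behaves the same way on this curve. The names mirror the notebook's, but the implementation is ours:

```python
def addell(P, Q, b, c, n):
    """Add points on y^2 = x^3 + b*x + c mod n (n prime here);
    'inf' denotes the point at infinity."""
    if P == 'inf':
        return Q
    if Q == 'inf':
        return P
    x1, y1 = P
    x2, y2 = Q
    if (x1 - x2) % n == 0 and (y1 + y2) % n == 0:
        return 'inf'                                    # Q = -P
    if (x1 - x2) % n == 0:
        m = (3 * x1 * x1 + b) * pow(2 * y1, -1, n) % n  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, n) % n         # chord slope
    x3 = (m * m - x1 - x2) % n
    return (x3, (m * (x1 - x3) - y1) % n)

def multell(P, k, b, c, n):
    R = 'inf'
    for _ in range(k):  # repeated addition; fine for small k
        R = addell(R, P, b, c, n)
    return R

print(addell((1, 3), (3, 5), 24, 13, 29))  # (26, 1)
print(multell((1, 3), 7, 24, 13, 29))      # (15, 6)
```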
The previous four examples worked mod the prime 29. If we work mod a composite number, the situation at infinity becomes more complicated since we could be at infinity mod both factors or we could be at infinity mod one of the factors but not mod the other. Therefore, we stop the calculation if this last situation happens and we exhibit a factor. For example, let's try to compute 12P, where P = (1, 3) is on the elliptic curve y^2 ≡ x^3 - 5x + 13 (mod 209), with 209 = 11 · 19:
In[6]:= multell[ {1, 3 }, 12, -5, 13, 11*19]
Out[6]= {factor=, 19 }
Now let’s compute the successive multiples to see what happened along the way:
In[7]:= multsell[ {1, 3 }, 12, -5, 13, 11*19]
Out[7]= {1,{1,3}, 2,{91,27}, 3,{118,133}, 4,{148,182}, 5,{20,35}, 6,{factor=, 19}}
When we computed 6P, we ended up at infinity mod 19. Let's see what is happening mod the two prime factors of 209, namely 19 and 11:
In[8]:= multsell[{1,3}, 12, -5, 13, 19]
Out[8]= {1,{1,3}, 2,{15,8}, 3,{4,0}, 4,{15,11}, 5,{1,16}, 6,{infinity,infinity}, 7,{1,3}, 8,{15,8}, 9,{4,0}, 10,{15,11}, 11,{1,16}, 12,{infinity,infinity}}
In[9]:= multsell[ {1, 3 }, 20, -5, 13, 11]
Out[9]= {1,{1,3}, 2,{3,5}, 3,{8,1}, 4,{5,6}, 5,{9,2}, 6,{6,10}, 7,{2,0}, 8,{6,1}, 9,{9,9}, 10,{5,5}, 11,{8,10}, 12,{3,6}, 13,{1,8}, 14,{infinity,infinity}, 15,{1,3}, 16,{3,5}, 17,{8,1}, 18,{5,6}, 19,{9,2}, 20,{6,10}}
After six steps, we were at infinity mod 19, but it takes 14 steps to reach infinity mod 11. To compute 6P mod 209, we needed to invert a number that was 0 mod 19 and nonzero mod 11. This couldn't be done, but it yielded the factor 19. This is the basis of the elliptic curve factorization method.
Factor 193279 using elliptic curves.
SOLUTION
First, we need to choose some random elliptic curves and a point on each curve. For example, let's take P = (2, 4) and the elliptic curve
y^2 ≡ x^3 - 10x + c (mod 193279).
For P to lie on the curve, we take c = 28. We'll also take the point (1, 1) on y^2 ≡ x^3 + 11x - 11 (mod 193279) and the point (1, 2) on y^2 ≡ x^3 + 17x - 14 (mod 193279).
Now we compute multiples of each point P. We do the analog of the p - 1 method, so we choose a bound B, say B = 12, and compute 12!P.
In[10]:= multell[{2,4}, Factorial[12], -10, 28, 193279]
Out[10]= {factor=, 347}
In[11]:= multell[{1,1}, Factorial[12], 11, -11, 193279]
Out[11]= {13862, 35249}
In[12]:= multell[{1, 2}, Factorial[12], 17, -14, 193279]
Out[12]= {factor=, 557}
Let’s analyze in more detail what happened in these examples.
On the first curve, 12!(2, 4) ends up at infinity mod 347 but not at infinity mod 557. The order of the point mod 557 has a prime factor larger than 12, so 12!(2, 4) is not infinity mod 557. But the order of the point mod 347 divides 35, and 35 divides 12!, so 12!(2, 4) is infinity mod 347.
On the second curve, 12!(1, 1) is not infinity mod 347 and not infinity mod 557: the order of the point mod each of 347 and 557 has a prime factor larger than 12, so we don't expect to find the factorization with this curve.
The third curve is a surprise. The orders of (1, 2) mod 347 and mod 557 each have a prime factor larger than 12, so we don't expect to find the factorization with this curve either. However, by chance, an intermediate step in the calculation of 12!(1, 2) yielded the factorization. Here's what happened. At one step, the program required adding the points (184993, 13462) and (20678, 150484). These two points are congruent mod 557 but not mod 347. Therefore, the slope of the line through these two points is defined mod 347 but is 0/0 mod 557. When we tried to find the multiplicative inverse of the denominator mod 193279, the gcd algorithm yielded the factor 557. This phenomenon is fairly rare.
Here is how to produce the example of an elliptic curve ElGamal cryptosystem from Section 21.5. For more details, see the text. The elliptic curve is y^2 ≡ x^3 + 3x + 45 (mod 8831) and the point is G = (4, 11). Alice's message is the point Pm = (5, 1743).
Bob has chosen his secret random number b = 3 and has computed bG:
In[13]:= multell[{4, 11}, 3, 3, 45, 8831]
Out[13]= {413, 1808}
Bob publishes this point. Alice chooses the random number k = 8 and computes kG and Pm + k(bG):
In[14]:= multell[{4, 11}, 8, 3, 45, 8831]
Out[14]= {5415, 6321}
In[15]:= addell[{5, 1743}, multell[{413, 1808}, 8, 3, 45, 8831], 3, 45, 8831]
Out[15]= {6626, 3576}
Alice sends (5415, 6321) and (6626, 3576) to Bob, who multiplies the first of these points by his secret b = 3:
In[16]:= multell[{5415, 6321}, 3, 3, 45, 8831]
Out[16]= {673, 146}
Bob then subtracts the result from the last point Alice sends him. Note that he subtracts by adding the point with the second coordinate negated:
In[17]:= addell[{6626, 3576}, {673, -146}, 3, 45, 8831]
Out[17]= {5, 1743}
Bob has therefore received Alice’s message.
Let's reproduce the numbers in the example of a Diffie-Hellman key exchange from Section 21.5: The elliptic curve is y^2 ≡ x^3 + x + 7206 (mod 7211) and the point is G = (3, 5). Alice chooses her secret NA = 12 and Bob chooses his secret NB = 23. Alice calculates 12G:
In[18]:= multell[{3, 5}, 12, 1, 7206, 7211]
Out[18]= {1794, 6375}
She sends (1794,6375) to Bob. Meanwhile, Bob calculates
In[19]:= multell[{3, 5}, 23, 1, 7206, 7211]
Out[19]= {3861, 1242}
and sends (3861, 1242) to Alice. Alice multiplies what she receives by 12 and Bob multiplies what he receives by 23:
In[20]:= multell[{3861, 1242}, 12, 1, 7206, 7211]
Out[20]= {1472, 2098}
In[21]:= multell[{1794, 6375}, 23, 1, 7206, 7211]
Out[21]= {1472, 2098}
Therefore, Alice and Bob have produced the same key.
These computer examples are written in Maple. If you have Maple available, you should try some of them on your computer. If Maple is not available, it is still possible to read the examples. They provide examples for several of the concepts of this book. For information on getting started with Maple, see Section B.1. To download a Maple notebook that contains the necessary commands, go to
bit.ly/2TzKFec
Download the Maple notebook math.mws that you find using the links starting at bit.ly/2TzKFec
Open Maple (on a Linux machine, use the command xmaple; on most other systems, click on the Maple icon), then open math.mws using the menu options under File on the command bar at the top of the Maple window. (Perhaps this is done automatically when you download it; it depends on your computer settings.)
With math.mws in the foreground, press the Enter or Return key on your keyboard. This will load the functions and packages needed for the following examples.
You are now ready to use Maple. If you want to try something easy, type a short command such as nextprime(1025); and then press the Return/Enter key. The result 1031 should appear (it's the next prime after 1025).
Go to the Computer Examples in Section B.3. Try typing in some of the commands there. The outputs should be the same as those in the examples. Press the Return or Enter key to make Maple evaluate an expression.
If you make a mistake in typing in a command and get an error message, you can edit the command and hit Return or Enter to try again. You don’t need to retype everything.
If you are looking for help or a command to do something, try the Help menu on the command bar at the top. If you can guess the name of a function, there is another way. For example, to obtain information on gcd, type ?gcd and Return or Enter.
The following are some Maple commands that are used in the examples. Some, such as phi, are built into Maple. Others, such as addell, are in the Maple notebook available at
bit.ly/2TzKFec
A command is ended with a semicolon. If you want to suppress the output, use a colon instead.
The argument of a function is enclosed in round parentheses. Vectors are enclosed in square brackets. Entering matrix(m,n,[a,b,c,...,z]) gives the m × n matrix with first row a, b, ... and last row ..., z. To multiply two matrices A and B, type evalm(A&*B).
If you want to refer to the previous output, use %. The next-to-last output is %%, etc. Note that % refers to the most recent output, not to the last displayed line. If you will be referring to an output frequently, it might be better to name it. For example, g:=phi(12345) defines g to be the value of phi(12345). Note that when you are assigning a value to a variable in this way, you should use a colon before the equality sign. Leaving out the colon is a common cause of hard-to-find errors.
Exponentiation is written as a^b. However, we will need to use modular exponentiation with very large exponents. In that case, use a&^b mod n. For modular exponentiation, you might need to use a space between & and ^. Use the right arrow key to escape from the exponent.
Some of the following commands require certain Maple packages to be loaded via the commands
with(numtheory), with(linalg), with(plots), with(combinat)
These are loaded when the math.mws notebook is loaded. However, if you want to use a command such as nextprime without loading the notebook, first type with(numtheory): to load the package (once for the whole session). Then you can use functions such as nextprime, isprime, etc. If you type with(numtheory) without the colon, you’ll get a list of the functions in the package, too.
The following are some of the commands used in the examples. We list them here for easy reference. To see how to use them, look at the examples. We have used txt to refer to a string of letters. Such strings should be enclosed in quotes ("string").
addell([x,y], [u,v], b, c, n) finds the sum of the points (x, y) and (u, v) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n). The integer n should be odd.
affinecrypt(txt,m,n) is the affine encryption of txt using mx + n mod 26.
allshifts(txt) gives all 26 shifts of txt.
chrem([a,b,...], [m,n,...]) gives a solution to the simultaneous congruences x ≡ a (mod m), x ≡ b (mod n), ....
choose(txt,m,n) lists the characters in txt in positions that are congruent to n (mod m).
coinc(txt,n) is the number of matches between txt and txt displaced by n.
corr(v) is the dot product of the vector v with the 26 shifts of the alphabet frequency vector.
phi(n) computes the Euler function phi(n) (don't try very large values of n).
igcdex(m,n,'x','y') computes the gcd of m and n along with a solution x, y of mx + ny = gcd(m, n). To get x and y, type x;y on this or a subsequent command line.
ifactor(n) factors n.
frequency(txt) lists the number of occurrences of each letter a through z in txt.
gcd(m,n) is the gcd of m and n.
inverse(M) finds the inverse of the matrix M.
lfsr(c,k,n) gives the sequence of n bits produced by the recurrence that has coefficients given by the vector c. The initial values of the bits are given by the vector k.
lfsrlength(v,n) tests the vector v of bits to see if it is generated by a recurrence of length at most n.
lfsrsolve(v,n) computes the coefficients of a recurrence, given a guess n for the length of the recurrence that generates the binary vector v.
max(v) is the largest element of the list v.
a mod n is the value of a (mod n).
multell([x,y], m, b, c, n) computes m times the point (x, y) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n).
multsell([x,y], m, b, c, n) lists the first m multiples of the point (x, y) on the elliptic curve y^2 ≡ x^3 + bx + c (mod n).
nextprime(x) gives the next prime greater than x.
num2text(n) changes a number n to letters. The successive pairs of digits must each be at most 26; space is 00, a is 01, ..., z is 26.
primroot(p) finds a primitive root for the prime p.
shift(txt,n) shifts txt by n.
text2num(txt) changes txt to numbers, with space=00, a=01, ..., z=26.
vigenere(txt,v) gives the Vigenère encryption of txt using the vector v as the key.
vigvec(txt,m,n) gives the frequencies of the letters a through z in positions congruent to n (mod m).
A shift cipher was used to obtain the ciphertext kddkmu. Decrypt it by trying all possibilities.
> allshifts("kddkmu")
"kddkmu"
"leelnv"
"mffmow"
"nggnpx"
"ohhoqy"
"piiprz"
"qjjqsa"
"rkkrtb"
"sllsuc"
"tmmtvd"
"unnuwe"
"voovxf"
"wppwyg"
"xqqxzh"
"yrryai"
"zsszbj"
"attack"
"buubdl"
"cvvcem"
"dwwdfn"
"exxego"
"fyyfhp"
"gzzgiq"
"haahjr"
"ibbiks"
"jccjlt"
As you can see, attack is the only word that occurs on this list, so that was the plaintext.
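For readers without Maple, trying all 26 shifts is a short function in Python:

```python
def allshifts(txt):
    """All 26 shifts of a lowercase a-z string."""
    return [''.join(chr((ord(ch) - 97 + s) % 26 + 97) for ch in txt)
            for s in range(26)]

print(allshifts("kddkmu")[16])  # attack
```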
Encrypt the plaintext message cleopatra using the affine function 7x + 8 (mod 26).
> affinecrypt("cleopatra", 7, 8)
"whkcjilxi"
The ciphertext mzdvezc was encrypted using the affine function 5x + 12 (mod 26). Decrypt it.
SOLUTION
First, solve y ≡ 5x + 12 (mod 26) for x to obtain x ≡ 5^(-1)(y - 12) (mod 26). We need to find the inverse of 5 mod 26:
> 5&^(-1) mod 26
21
(On some computers, the &^ doesn't work. Instead, type a backslash and then the exponent. Use the right arrow key to escape from the exponent before typing mod. For some reason, a space is needed before a parenthesis in an exponent.)
Therefore, x ≡ 21(y - 12) ≡ 21y - 252 (mod 26). To change -252 to standard form:
> -252 mod 26
8
Therefore, the decryption function is x ≡ 21y + 8 (mod 26). To decrypt the message:
> affinecrypt("mzdvezc", 21, 8)
"anthony"
In case you were wondering, the plaintext was encrypted as follows:
> affinecrypt("anthony", 5, 12)
"mzdvezc"
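The affine functions are just as short in Python, with letters a, ..., z taken as 0, ..., 25 to match affinecrypt:

```python
def affinecrypt(txt, m, n):
    """Affine encryption x -> m*x + n (mod 26), letters a..z = 0..25."""
    return ''.join(chr((m * (ord(ch) - 97) + n) % 26 + 97) for ch in txt)

def affine_decrypt(txt, m, n):
    """Invert y = m*x + n: x = m^(-1) * (y - n) mod 26."""
    minv = pow(m, -1, 26)  # m must be coprime to 26
    return affinecrypt(txt, minv, (-minv * n) % 26)

print(affinecrypt("anthony", 5, 12))     # mzdvezc
print(affine_decrypt("mzdvezc", 5, 12))  # anthony
```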
Here is the example of a Vigenère cipher from the text. Let’s see how to produce the data that was used in Section 2.3 to decrypt it. For convenience, we’ve already stored the ciphertext under the name vvhq.
> vvhq
"vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljshbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwlwgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfqtwlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivwikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile"
Find the frequencies of the letters in the ciphertext:
> frequency(vvhq)
[8, 5, 12, 4, 15, 10, 27, 16, 13, 14, 17, 25, 7, 7, 5, 9, 14, 17, 24, 8, 12, 22, 22, 5, 8, 5]
Let’s compute the coincidences for displacements of 1, 2, 3, 4, 5, 6:
> coinc(vvhq,1)
14
> coinc(vvhq,2)
14
> coinc(vvhq,3)
16
> coinc(vvhq,4)
14
> coinc(vvhq,5)
24
> coinc(vvhq,6)
12
We conclude that the key length is probably 5. Let’s look at the 1st, 6th, 11th, ... letters (namely, the letters in positions congruent to 1 mod 5):
> choose(vvhq, 5, 1)
"vvuttcccqgcunjtpjgkuqpknjkygkkgcjfqrkqjrqudukvpkvggjjivgjggpfncwuce"
> frequency(%)
[0, 0, 7, 1, 1, 2, 9, 0, 1, 8, 8, 0, 0, 3, 0, 4, 5, 2, 0, 3, 6, 5, 1, 0, 1, 0]
To express this as a vector of frequencies:
> vigvec(vvhq, 5, 1)
[0., 0., .1044776119, .01492537313, .01492537313, .02985074627, .1343283582, 0., .01492537313, .1194029851, .1194029851, 0., 0., .04477611940, 0., .05970149254, .07462686567, .02985074627, 0., .04477611940, .08955223881, .07462686567, .01492537313, 0., .01492537313, 0.]
The dot products of this vector with the displacements of the alphabet frequency vector are computed as follows:
> corr(%)
.02501492539, .03910447762, .07132835821, .03882089552, .02749253732, .03801492538, .05120895523, .03014925374, .03247761194, .04302985074, .03377611940, .02985074628, .03426865672, .04456716420, .03555223882, .04022388058, .04343283582, .05017910450, .03917910447, .02958208957, .03262686569, .03917910448, .03655223881, .03161194031, .04883582088, .03494029848
The third entry is the maximum, but sometimes the largest entry is hard to locate. One way to find it is
> max(%)
.07132835821
Now it is easy to look through the list and find this number (it usually occurs only once). Since it occurs in the third position, the first shift for this Vigenère cipher is by 2, corresponding to the letter c. A procedure similar to the one just used (using vigvec(vvhq,5,2), ..., vigvec(vvhq,5,5)) shows that the other shifts are probably 14, 3, 4, 18. Let's check that we have the correct key by decrypting.
> vigenere(vvhq, -[2, 14, 3, 4, 18])
themethodusedforthepreparationandreadingofcodemessagesissimpleintheextremeandatthesametimeimpossibleoftranslationunlessthekeyisknowntheeasewithwhichthekeymaybechangedisanotherpointinfavoroftheadoptionofthiscodebythosedesiringtotransmitimportantmessageswithouttheslightestdangeroftheirmessagesbeingreadbypoliticalorbusinessrivalsetc
For the record, the plaintext was originally encrypted by the command
> vigenere(%, [2, 14, 3, 4, 18])
vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljshbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwlwgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfqtwlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivwikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile
Find gcd(23456, 987654).
> gcd(23456, 987654)
2
Solve 23456x + 987654y = 2 in integers x, y.
> igcdex(23456, 987654, 'x', 'y')
2
> x; y
-3158
75
This means that 2 is the gcd and 23456·(-3158) + 987654·75 = 2. (The command igcdex is for integer gcd extended. Maple also calculates gcd's for polynomials.) Variable names other than 'x' and 'y' can be used if these letters are going to be used elsewhere, for example, in a polynomial. We can also clear the value of x as follows:
> x:='x'
x := x
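The extended Euclidean algorithm behind igcdex, sketched in Python (igcdex here is our reimplementation, not Maple's):

```python
def igcdex(m, n):
    """Extended Euclidean algorithm: (g, x, y) with m*x + n*y = g = gcd(m, n)."""
    old_r, r = m, n
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r  # remainders shrink as in Euclid
        old_x, x = x, old_x - q * x  # coefficients updated in parallel
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = igcdex(23456, 987654)
print(g)  # 2
assert 23456 * x + 987654 * y == g
```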
Compute 234 · 456 (mod 789).
> 234*456 mod 789
189
Compute 234567^876543 (mod 565656565).
> 234567&^876543 mod 565656565
473011223
You might need a space between the & and the ^. Use the right arrow key to escape from the exponent mode.
Find the multiplicative inverse of 87878787 (mod 9191919191).
> 87878787&^(-1) mod 9191919191
7079995354
You might need a space before the exponent. (The command 1/87878787 mod 9191919191 also works.)
Solve 7654x ≡ 2389 (mod 65537).
SOLUTION
Here is one way.
> solve(7654*x=2389, x) mod 65537
43626
Here is another way.
> 2389/7654 mod 65537
43626
The fraction 2389/7654 will appear as a vertically set fraction. Use the right arrow key to escape from the fraction mode.
Find x with x ≡ 2 (mod 78), x ≡ 5 (mod 97), x ≡ 1 (mod 119).
> chrem([2, 5, 1], [78, 97, 119])
647480
We can check the answer:
> 647480 mod 78; 647480 mod 97; 647480 mod 119
2
5
1
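A Python version of chrem, valid for pairwise coprime moduli:

```python
from math import prod

def chrem(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m                    # product of the other moduli
        x += a * Mi * pow(Mi, -1, m)   # Mi is invertible mod m
    return x % M

print(chrem([2, 5, 1], [78, 97, 119]))  # 647480
```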
Factor 123450 into primes.
> ifactor(123450)
(2)(3)(5)^2(823)
This means that 123450 = 2 · 3 · 5^2 · 823.
Evaluate phi(12345).
> phi(12345)
6576
Find a primitive root for the prime 65537.
> primroot(65537)
3
Therefore, 3 is a primitive root for 65537.
Find the inverse mod 999 of the matrix with rows (13, 12, 35), (41, 53, 62), (71, 68, 10).
SOLUTION
First, invert the matrix without the mod, and then reduce the matrix mod 999:
> inverse(matrix(3,3,[13, 12, 35, 41, 53, 62, 71, 68, 10]))
> map(x->x mod 999, %)
This is the inverse matrix mod 999.
Find a square root of 26951623672 mod the prime 98573007539.
SOLUTION
Since 98573007539 ≡ 3 (mod 4), we can use the proposition of Section 3.9:
> 26951623672&^((98573007539 + 1)/4) mod 98573007539
98338017685
(You need two right arrows to escape from the fraction mode and then the exponent mode.) The other square root is minus the preceding one:
> -% mod 98573007539
234989854
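The proposition of Section 3.9 (for a prime p ≡ 3 (mod 4), a square root of a square a is a^((p+1)/4) mod p) translates directly into Python:

```python
def sqrt_mod_p(a, p):
    """Square root mod a prime p with p ≡ 3 (mod 4): a^((p+1)/4) mod p.
    Only valid when a actually is a square mod p."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    assert r * r % p == a % p, "a is not a square mod p"
    return r

p = 98573007539
r = sqrt_mod_p(26951623672, p)
print(r, p - r)  # 98338017685 234989854
```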
Let n = 34222273 = 9803 · 3491. Find all four solutions of x^2 ≡ 19101358 (mod 34222273).
SOLUTION
First, find a square root mod each of the two prime factors, both of which are congruent to 3 mod 4:
> 19101358&^((9803 + 1)/4) mod 9803
3998
> 19101358&^((3491 + 1)/4) mod 3491
1318
Therefore, the square roots are congruent to ±3998 (mod 9803) and are congruent to ±1318 (mod 3491). There are four ways to combine these using the Chinese remainder theorem:
> chrem([3998, 1318], [9803, 3491])
43210
> chrem([-3998, 1318], [9803, 3491])
8397173
> chrem([3998, -1318], [9803, 3491])
25825100
> chrem([-3998, -1318], [9803, 3491])
34179063
These are the four desired square roots.
Compute the first 50 terms of the recurrence x_{n+5} ≡ x_n + x_{n+2} (mod 2).
The initial values are 0, 1, 0, 0, 0.
SOLUTION
The vector of coefficients is [1, 0, 1, 0, 0] and the initial values are given by the vector [0, 1, 0, 0, 0]. Type
> lfsr([1, 0, 1, 0, 0], [0, 1, 0, 0, 0], 50)
[0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1]
Suppose the first 20 terms of an LFSR sequence are 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1. Find a recurrence that generates this sequence.
SOLUTION
First, we need to find the length of the recurrence. The command lfsrlength(v, n) calculates the determinants mod 2 of the first n matrices that appear in the procedure in Section 5.2:
> lfsrlength([1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1], 10)
[1, 1]
[2, 1]
[3, 0]
[4, 1]
[5, 0]
[6, 1]
[7, 0]
[8, 0]
[9, 0]
[10, 0]
The last nonzero determinant is the sixth one, so we guess that the recurrence has length 6. To find the coefficients:
> lfsrsolve([1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1], 6)
[1, 0, 1, 1, 1, 0]
This gives the recurrence as x_{n+6} ≡ x_n + x_{n+2} + x_{n+3} + x_{n+4} (mod 2).
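A Python sketch of lfsr, with the convention (as above) that c[0] multiplies the oldest bit in the window:

```python
def lfsr(c, start, n):
    """First n bits of the LFSR x_{k+L} = sum of c[i]*x_{k+i} (mod 2),
    where L = len(c) and c[0] multiplies the oldest bit."""
    bits = list(start)
    L = len(c)
    while len(bits) < n:
        # next bit from the last L bits, weighted by the coefficients
        bits.append(sum(ci * bi for ci, bi in zip(c, bits[-L:])) % 2)
    return bits[:n]

print(lfsr([1, 0, 1, 0, 0], [0, 1, 0, 0, 0], 13))
# [0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1]
```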
The ciphertext 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0 was produced by adding the output of an LFSR onto the plaintext mod 2 (i.e., XOR the plaintext with the LFSR output). Suppose you know that the plaintext starts 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0. Find the rest of the plaintext.
SOLUTION
XOR the ciphertext with the known part of the plaintext to obtain the beginning of the LFSR output:
> [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0] + [0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1] mod 2
[1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
This is the beginning of the LFSR output. Now let’s find the length of the recurrence.
> lfsrlength(%, 8)
[1, 1]
[2, 0]
[3, 1]
[4, 0]
[5, 1]
[6, 0]
[7, 0]
[8, 0]
We guess the length is 5. To find the coefficients of the recurrence:
> lfsrsolve(%%, 5)
[1, 1, 0, 0, 1]
Now we can generate the full output of the LFSR using the coefficients we just found plus the first five terms of the LFSR output:
> lfsr([1, 1, 0, 0, 1], [1, 0, 0, 1, 0], 40)
[1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
When we XOR the LFSR output with the ciphertext, we get back the plaintext:
> % + [0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0] mod 2
[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
This is the plaintext.
The ciphertext blocks 22, 09, 00, 12, 03, 01, 10, 03, 04, 08, 01, 17
were encrypted using a Hill cipher with the matrix whose rows are (1, 2, 3), (4, 5, 6), (7, 8, 10).
Decrypt it.
SOLUTION
There are several ways to input a matrix. One way is the following. A matrix can be entered as matrix(2,2,[a,b,c,d]). Type evalm(M&*N) to multiply matrices and Type evalm(v&*M) to multiply a vector on the right by a matrix
Here is the encryption matrix.
> M:=matrix(3,3,[1,2,3,4,5,6,7,8,10])
We need to invert the matrix mod 26:
> invM:=map(x->x mod 26, inverse(M))
The command map(x->x mod 26, E) takes each number in an expression E and reduces it mod 26.
This is the inverse of the matrix mod 26. We can check this as follows:
> evalm(M&*invM)
> map(x->x mod 26, %)
To decrypt, we break the ciphertext into blocks of three numbers and multiply each block on the right by the inverse matrix we just calculated:
> map(x->x mod 26, evalm([22,09,00]&*invM))
[14, 21, 4]
> map(x->x mod 26, evalm([12,03,01]&*invM))
[17, 19, 7]
> map(x->x mod 26, evalm([10,03,04]&*invM))
[4, 7, 8]
> map(x->x mod 26, evalm([08,01,17]&*invM))
[11, 11, 23]
Therefore, the plaintext is 14, 21, 4, 17, 19, 7, 4, 7, 8, 11, 11, 23. Changing this back to letters, we obtain overthehillx. Note that the final x was appended to the plaintext in order to complete a block of three letters.
Suppose you need to find a large random prime of 50 digits. Here is one way. The function nextprime finds the next prime greater than its argument. The function rand(a..b)() gives a random integer between a and b. Combining these, we can find a prime:
> nextprime(rand(10^49..10^50)())
73050570031667109175215303340488313456708913284291
If we repeat this procedure, we should get another prime:
> nextprime(rand(10^49..10^50)())
97476407694931303255724326040586144145341054568331
Suppose you want to change the text hellohowareyou to numbers:
> text2num("hellohowareyou")
805121215081523011805251521
Note that we are now using a = 1, b = 2, ..., z = 26, since otherwise a's at the beginnings of messages would disappear. (A more efficient procedure would be to work in base 27; the numerical form of the message would then use fewer digits.)
Now suppose you want to change it back to letters:
> num2text(805121215081523011805251521)
"hellohowareyou"
Encrypt the message hi using RSA with n = 823091 and e = 17.
SOLUTION
First, change the message to numbers:
> text2num("hi")
809
Now, raise it to the 17th power mod 823091:
> %&^17 mod 823091
596912
You might need a space before the ^. Use the right arrow key to escape from the exponent mode.
Decrypt the ciphertext in the previous problem.
SOLUTION
First, we need to find the decryption exponent d. To do this, we need to find phi(823091). One way is
> phi(823091)
821184
Another way is to factor 823091 as 659 · 1249 and then compute phi(823091) = 658 · 1248:
> ifactor(823091)
(659)(1249)
> 658*1248
821184
Since de ≡ 1 (mod phi(n)), we compute the following (note that we are finding the inverse of 17 mod 821184, not mod 823091):
> 17&^(-1) mod 821184
48305
Therefore, d = 48305. To decrypt, raise the ciphertext to the 48305th power mod 823091:
> 596912&^48305 mod 823091
809
Finally, change back to letters:
> num2text(809)
"hi"
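The whole RSA example fits in a few lines of Python:

```python
n, e = 823091, 17
p, q = 659, 1249           # the factorization of n
phi = (p - 1) * (q - 1)    # 821184
d = pow(e, -1, phi)        # decryption exponent (Python 3.8+)
m = 809                    # text2num("hi")
c = pow(m, e, n)           # encrypt
print(d, c, pow(c, d, n))  # decrypting c recovers 809
```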
Encrypt hellohowareyou using RSA with n = 823091 and e = 17.
SOLUTION
First, change the plaintext to numbers:
> text2num("hellohowareyou")
805121215081523011805251521
Suppose we simply raised this to the 17th power mod 823091:
> %&^17 mod 823091
447613
If we decrypt (we know d = 48305 from Example 25), we obtain
> %&^48305 mod 823091
628883
This is not the original plaintext. The reason is that the plaintext is larger than n = 823091, so all we have obtained is the plaintext mod 823091:
> 805121215081523011805251521 mod 823091
628883
We need to break the plaintext into blocks, each less than 823091. In our case, we use three letters at a time:
> 80512&^17 mod 823091
757396
> 121508&^17 mod 823091
164513
> 152301&^17 mod 823091
121217
> 180525&^17 mod 823091
594220
> 1521&^17 mod 823091
442163
The ciphertext is therefore 757396164513121217594220442163. Note that there is no reason to change this back to letters. In fact, it doesn’t correspond to any text with letters.
Decrypt each block individually:
> 757396&^48305 mod 823091
80512
> 164513&^48305 mod 823091
121508
etc.
We’ll now do some examples with large numbers, namely the numbers in the RSA Challenge discussed in Section 9.5. These are stored under the names rsan, rsae, rsap, rsaq:
> rsan
114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541
> rsae
9007
Encrypt each of the messages b, ba, bar, bard using rsan and rsae.
> text2num("b")&^rsae mod rsan
70946758467612668598370164991550786182876331060685235410564704114486782261716497200122155332348462014053287987580899263765142534
> text2num("ba")&^rsae mod rsan
35045130608975100325011709449871954273788204753948593060313697698227621759806027962270538031565564773352033671782261305796158951
> text2num("bar")&^rsae mod rsan
44814512863855101076004530859492109342429531606607409070360543408000843645986880405953102818312822586362580298784441151922606424
> text2num("bard")&^rsae mod rsan
24238077785111666423202862512090317393485212959056270783134991614256054323297179804928958073445752663026449873986877989329909498
Observe that the ciphertexts are all the same length. There seems to be no easy way to determine the length of the corresponding plaintext.
Using the factorization rsan = rsap · rsaq, find the decryption exponent for the RSA Challenge, and decrypt the ciphertext (see Section 9.5).
First we find the decryption exponent:
> rsad := rsae&^(-1) mod ((rsap-1)*(rsaq-1)):
Note that we use the final colon to avoid printing out the value. If you want to see the value of rsad, see Section 9.5, or don’t use the colon. To decrypt the ciphertext, which is stored as rsaci, and change to letters:
> num2text(rsaci&^rsad mod rsan)
"the magic words are squeamish ossifrage"
Encrypt the message rsaencryptsmessageswell using rsan and rsae.
> text2num("rsaencryptsmessageswell")&^rsae mod rsan
94639420349002259316305823539249496414640969934001709721404352418271950654254365584906013966328817753539283112653197553130781884
Decrypt the preceding ciphertext.
SOLUTION
Fortunately, we know the decryption exponent rsad. Therefore, we compute
> %&^rsad mod rsan
1819010514031825162019130519190107051923051212
> num2text(%)
"rsaencryptsmessageswell"
Suppose we lose the final digit 4 of the ciphertext in transmission. Let’s try to decrypt what’s left (subtracting 4 and dividing by 10 is a mathematical way to remove the 4):
> ((%%% - 4)/10)&^rsad mod rsan
47952999173195988664902352629525486409113633894375629846854907970588412300373487969657794254117158956921267912628461494475682806
If we try to change this to letters, we do not get anything resembling the message. A small error in the plaintext completely changes the decrypted message and usually produces garbage.
Suppose we are told that n = 11313771275590312567 is the product of two primes p and q, and that phi(n) = 11313771187608744400. Factor n.
SOLUTION
We know (see Section 9.1) that p and q are the roots of X^2 - (n - phi(n) + 1)X + n. Therefore, we compute
> solve(x^2 - (11313771275590312567 - 11313771187608744400 + 1)*x + 11313771275590312567, x)
87852787151, 128781017
Therefore, n = 87852787151 · 128781017. We also could have used the quadratic formula to find the roots.
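Instead of solve, one can apply the quadratic formula with an integer square root. A Python sketch (the function name is ours):

```python
from math import isqrt

def factor_with_phi(n, phi):
    """p and q are the roots of x^2 - (n - phi + 1)x + n."""
    s = n - phi + 1           # p + q
    t = isqrt(s * s - 4 * n)  # integer sqrt of the discriminant
    assert t * t == s * s - 4 * n
    return (s - t) // 2, (s + t) // 2

p, q = factor_with_phi(11313771275590312567, 11313771187608744400)
print(p, q)  # 128781017 87852787151
```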
Suppose we know rsae and rsad. Use these to factor rsan.
SOLUTION
We use the factorization method from Section 9.4. First write rsae·rsad - 1 = 2^s · m with m odd. One way to do this is first to compute rsae·rsad - 1 and then keep dividing by 2 until we get an odd number:
> rsae*rsad - 1
961034419617782266156919023359583834109854129051878330250644604041155985575087352659156174898557342995131594680431086921245830097664
> %/2
480517209808891133078459511679791917054927064525939165125322302020577992787543676329578087449278671497565797340215543460622915048832
> %/2
240258604904445566539229755839895958527463532262969582562661151010288996393771838164789043724639335748782898670107771730311457524416
We continue this way for six more steps until we get
3754040701631961977175464934998374351991617691608899727541580484535765568652684971324828808197489621074732791720433933286116523819
This number is m. Now choose a random integer a. Hoping to be lucky, we choose 13. As in the universal exponent factorization method, we compute 13^m mod rsan:
> 13 &^ % mod rsan
27574368507006530592243494868847161198423095707307805690569839647030183109839862370800529338092984795490192643587960859870551239
Since this is not 1, we successively square it until we get 1:
> % &^ 2 mod rsan
48318960321928515580138476418723034554104099069940846225494702776654996412582955636035266156108686431194298574075854037512277292
> % &^ 2 mod rsan
78172814154877356579141928058754000021948787056483820917930625115215181839742056013275521913487560944732073516487722273875579363
> % &^ 2 mod rsan
42836191202508728742199299040582900202976222916017767167551870216509444518239462186379470569442055101392992293082259601738228702
> % &^ 2 mod rsan
1
Since the last number before the 1 was not ±1 (mod rsan), we have an example of x^2 ≡ 1 (mod rsan) with x ≢ ±1. Therefore, gcd(x − 1, rsan) is a nontrivial factor of rsan:
> gcd(%% - 1, rsan)
32769132993266709549961988190834461413177642967992942539798288533
This is rsaq. The other factor is obtained by computing rsan/rsaq:
> rsan/%
3490529510847650949147849619903898133417764638493387843990820577
This is rsap.
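The whole procedure can be sketched in Python (a generic implementation of the Section 9.4 method, not the book's code), tested here on a small illustrative modulus rather than rsan:

```python
from math import gcd

def factor_from_ed(n, e, d):
    # Write e*d - 1 = 2^s * t with t odd, then hunt for a square root
    # of 1 mod n that is neither 1 nor -1; its gcd with n is a factor.
    t, s = e * d - 1, 0
    while t % 2 == 0:
        t //= 2
        s += 1
    for a in range(2, 100):
        if gcd(a, n) > 1:            # lucky: a shares a factor with n
            return gcd(a, n)
        x = pow(a, t, n)
        if x == 1:
            continue
        for _ in range(s):           # square repeatedly, watching for 1
            y = x * x % n
            if y == 1:
                if x != n - 1:
                    g = gcd(x - 1, n)
                    if 1 < g < n:
                        return g
                break
            x = y
    return None

# illustrative: 3953 = 59 * 67, phi = 3828, and 5 * 2297 = 3 * 3828 + 1
print(factor_from_ed(3953, 5, 2297) in (59, 67))   # True
```

At least half of all choices of a succeed, so trying small values of a in order finds a factor almost immediately.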
Suppose you know that 150883475569451^2 ≡ 16887570532858^2 (mod 205611444308117). Factor 205611444308117.
SOLUTION
We use the Basic Principle of Section 9.4:
> gcd(150883475569451 - 16887570532858, 205611444308117)
23495881
This gives one factor. The other is
> 205611444308117/%
8750957
We can check that these factors are actually primes, so we can’t factor any further:
> isprime(%%)
true
> isprime(%%)
true
Factor n = 376875575426394855599989992897873239 by the p − 1 method.
SOLUTION
Let's choose our bound as B = 100, and let's take a = 2, so we compute 2^(100!) mod n:
> 2 &^ factorial(100) mod 376875575426394855599989992897873239
369676678301956331939422106251199512
Then we compute the gcd of 2^(100!) − 1 and n:
> gcd(% - 1, 376875575426394855599989992897873239)
430553161739796481
This is a factor p. The other factor q is
> 376875575426394855599989992897873239/%
875328783798732119
Let's see why this worked. The factorizations of p − 1 and q − 1 are
> ifactor(430553161739796481 - 1)
(2)(3)(5)(7)(11)(47)
> ifactor(875328783798732119 - 1)
(2)(61)(8967967)(20357)(39301)
We see that p − 1 is a product of small prime powers, so 100! is a multiple of p − 1 and 2^(100!) ≡ 1 (mod p). However, q − 1 has the large prime factors 8967967, 20357, and 39301, so 100! is not a multiple of q − 1, and it is likely that 2^(100!) ≢ 1 (mod q). Therefore, both 2^(100!) − 1 and n have p as a factor, but only n has q as a factor. It follows that the gcd is p.
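A Python sketch of the p − 1 method: build up 2^(B!) mod n one exponent at a time, then take a gcd (generic code, run here on this example's n):

```python
from math import gcd

def p_minus_1(n, bound):
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)             # after this step, a = 2^(k!) mod n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None  # None if the method fails for this bound

print(p_minus_1(376875575426394855599989992897873239, 100))
# 430553161739796481
```

Exponentiating step by step keeps every intermediate result reduced mod n, so 100! itself is never written down.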
Let's solve the discrete log problem 2^x ≡ 71 (mod 131) by the Baby Step-Giant Step method of Subsection 10.2.2. We take N = 12, since N^2 > 130, and we form two lists. The first is 2^j (mod 131) for 0 ≤ j ≤ 11:
> for j from 0 while j <= 11 do; (j, 2 &^ j mod 131); end do;
(0, 1), (1, 2), (2, 4), (3, 8), (4, 16), (5, 32), (6, 64), (7, 128), (8, 125), (9, 119), (10, 107), (11, 83)
The second is 71 · 2^(−12j) (mod 131) for 0 ≤ j ≤ 11:
> for j from 0 while j <= 11 do; (j, 71*2 &^ (-12*j) mod 131); end do;
(0, 71), (1, 17), (2, 124), (3, 26), (4, 128), (5, 86), (6, 111), (7, 93), (8, 85), (9, 96), (10, 130), (11, 116)
The number 128 is on both lists, so we see that 2^7 ≡ 71 · 2^(−48) (mod 131). Therefore, 2^55 ≡ 71 (mod 131), and the discrete log is x = 55.
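The same search can be sketched in Python, with a dictionary holding the baby list so the match is found without scanning:

```python
from math import isqrt

def bsgs(g, h, p):
    # Solve g^x = h (mod p) by matching g^j against h * g^(-N*i)
    N = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(N)}   # baby steps g^j
    giant = pow(g, -N, p)                        # giant step g^(-N)
    y = h % p
    for i in range(N):
        if y in baby:
            return i * N + baby[y]               # x = i*N + j
        y = y * giant % p
    return None

print(bsgs(2, 71, 131))   # 55
```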
Suppose there are 23 people in a room. What is the probability that at least two have the same birthday?
SOLUTION
The probability that no two have the same birthday is the product of (1 − i/365) for 1 ≤ i ≤ 22 (note that the product stops at 22, not 23). Subtracting from 1 gives the probability that at least two have the same birthday:
> 1 - mul(1. - i/365, i=1..22)
.5072972344
Note that we used 1. (with the decimal point) in the product instead of 1. If we had omitted the decimal point, the product would have been evaluated as a rational number (try it, you'll see).
Suppose a lazy phone company employee assigns telephone numbers by choosing random seven-digit numbers. In a town with 10,000 phones, what is the probability that two people receive the same number?
> 1 - mul(1. - i/10^7, i=1..9999)
.9932699133
Note that the number of phones is about three times the square root of the number of possibilities. This means that we expect the probability to be high, which it is. From Section 12.1, we have the estimate that if there are around 1.177·sqrt(10^7) ≈ 3723 phones, there should be a 50% chance of a match. Let's see how accurate this is:
> 1 - mul(1. - i/10^7, i=1..3722)
.4998945441
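Both computations are the same product; here is a Python sketch:

```python
def match_probability(picks, choices):
    # 1 - prod_{i=1}^{picks-1} (1 - i/choices)
    p_distinct = 1.0
    for i in range(1, picks):
        p_distinct *= 1.0 - i / choices
    return 1.0 - p_distinct

print(round(match_probability(23, 365), 4))        # 0.5073
print(round(match_probability(10000, 10**7), 4))   # 0.9933
```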
Suppose we have a (5, 8) Shamir secret sharing scheme. Everything is mod the prime p = 987541. Five of the shares are (9853, 853), (4421, 4387), (6543, 1234), (93293, 78428), (12398, 7563).
Find the secret.
SOLUTION
One way: First, find the Lagrange interpolating polynomial through the five points:
> interp([9853,4421,6543,93293,12398],[853,4387,1234,78428,7563],x)
Now evaluate at x = 0 to find the constant term:
> eval(%,x=0)
We need to change this to an integer mod 987541:
> % mod 987541
678987
Therefore, 678987 is the secret.
Here is another way. Set up the matrix equations as in the text and then solve for the coefficients of the polynomial mod 987541:
> map(x -> x mod 987541, evalm(inverse(matrix(5,5, [1, 9853, 9853^2, 9853^3, 9853^4, 1, 4421, 4421^2, 4421^3, 4421^4, 1, 6543, 6543^2, 6543^3, 6543^4, 1, 93293, 93293^2, 93293^3, 93293^4, 1, 12398, 12398^2, 12398^3, 12398^4])) &* matrix(5,1,[853,4387,1234,78428,7563])))
The constant term is 678987, which is the secret.
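The Lagrange interpolation can also be carried out entirely mod p in Python; a generic sketch, run on the shares above:

```python
def shamir_secret(shares, p):
    # Lagrange interpolation evaluated at x = 0, all arithmetic mod p
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * -xj % p          # factor (0 - xj)
                den = den * (xi - xj) % p    # factor (xi - xj)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = [(9853, 853), (4421, 4387), (6543, 1234), (93293, 78428), (12398, 7563)]
print(shamir_secret(shares, 987541))   # 678987
```

Working mod p from the start avoids the huge rational coefficients that appear when interpolating over the rationals first.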
Here is a game you can play. It is essentially the simplified version of poker over the telephone from Section 18.2. There are five cards: ten, jack, queen, king, ace. They are shuffled and disguised by raising their numbers to a random exponent mod the prime 24691313099. You are supposed to guess which one is the ace.
To start, pick a random exponent k. We use the colon after khide() so that we cannot cheat and see what value of k is being used.
> k:= khide():
Now, shuffle the disguised cards (their numbers are raised to the kth power mod 24691313099 and then randomly permuted):
> shuffle(k)
[14001090567, 16098641856, 23340023892, 20919427041, 7768690848]
These are the five cards. None looks like the ace; that's because their numbers have been raised to powers mod the prime. Make a guess anyway. Let's see if you're correct.
> reveal(%)
["ten", "ace", "queen", "jack", "king"]
Let’s play again:
> k := khide():
> shuffle(k)
[13015921305, 14788966861, 23855418969, 22566749952, 8361552666]
Make your guess (note that the numbers are different because a different random exponent was used). Were you lucky?
> reveal(%)
["ten", "queen", "ace", "king", "jack"]
Perhaps you need some help. Let’s play one more time:
> k := khide():
> shuffle(k)
[13471751030, 20108480083, 8636729758, 14735216549, 11884022059]
We now ask for advice:
> advise(%)
3
We are advised that the third card is the ace. Let’s see (recall that %% is used to refer to the next to last output):
> reveal(%%)
["jack", "ten", "ace", "queen", "king"]
How does this work? Read the part on "How to Cheat" in Section 18.2. Note that if we raise the numbers for the cards to the power (24691313099 − 1)/2 mod 24691313099, we get
> map(x -> x &^ ((24691313099-1)/2) mod 24691313099, [200514, 10010311, 1721050514, 11091407, 10305])
[1, 1, 1, 1, 24691313098]
Therefore, only the ace is a quadratic nonresidue mod 24691313099.
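This is Euler's criterion, which is easy to replicate in Python: a nonzero a is a square mod an odd prime p exactly when a^((p−1)/2) ≡ 1 (mod p). The card encodings are those printed above.

```python
p = 24691313099
cards = [200514, 10010311, 1721050514, 11091407, 10305]  # ten, ..., ace encodings

def is_square_mod(a, p):
    # Euler's criterion: 1 for a quadratic residue, p-1 for a nonresidue
    return pow(a, (p - 1) // 2, p) == 1

print([is_square_mod(c, p) for c in cards])
# [True, True, True, True, False]  -- only the ace is a nonresidue
```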
All of the elliptic curves we work with in this chapter are elliptic curves mod p. However, it is helpful to use graphs of elliptic curves over the real numbers in order to visualize what is happening with the addition law, for example, even though such pictures do not exist mod p.
Let's graph the elliptic curve y^2 = x(x − 1)(x + 1). We'll specify the ranges −1 ≤ x ≤ 3 and −5 ≤ y ≤ 5, and make sure that x and y are cleared of previous values.
> x:='x'; y:='y'; implicitplot(y^2 = x*(x-1)*(x+1), x=-1..3, y=-5..5)
Add the points (1, 3) and (3, 5) on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29):
> addell([1,3], [3,5], 24, 13, 29)
[26,1]
You can check that the point (26, 1) is on the curve:
Add (1, 3) to the point at infinity on the curve of the previous example.
> addell([1,3], ["infinity","infinity"], 24, 13, 29)
[1,3]
As expected, adding the point at infinity to a point P returns the point P.
Let P = (1, 3) be a point on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29). Find 7P.
> multell([1,3], 7, 24, 13, 29)
[15,6]
Find kP for 1 ≤ k ≤ 40 for P = (1, 3) on the curve of the previous example.
> multsell([1,3], 40, 24, 13, 29)
[[1,[1,3]],[2,[11,10]],[3,[23,28]],[4,[0,10]],[5,[19,7]],[6,[18,19]],[7,[15,6]],[8,[20,24]],[9,[4,12]],[10,[4,17]],[11,[20,5]],[12,[15,23]],[13,[18,10]],[14,[19,22]],[15,[0,19]],[16,[23,1]],[17,[11,19]],[18,[1,26]],[19,["infinity","infinity"]],[20,[1,3]],[21,[11,10]],[22,[23,28]],[23,[0,10]],[24,[19,7]],[25,[18,19]],[26,[15,6]],[27,[20,24]],[28,[4,12]],[29,[4,17]],[30,[20,5]],[31,[15,23]],[32,[18,10]],[33,[19,22]],[34,[0,19]],[35,[23,1]],[36,[11,19]],[37,[1,26]],[38,["infinity","infinity"]],[39,[1,3]],[40,[11,10]]]
Notice how the points repeat after every 19 multiples.
The previous four examples worked mod the prime 29. If we work mod a composite number, the situation at infinity becomes more complicated, since we could be at infinity mod both factors or we could be at infinity mod one of the factors but not mod the other. Therefore, we stop the calculation if this last situation happens, and we exhibit a factor. For example, let's try to compute 12P, where P = (1, 3) is on the elliptic curve y^2 ≡ x^3 − 5x + 13 (mod 209), with 209 = 11 · 19:
> multell([1,3], 12, -5, 13, 11*19)
["factor=",19]
Now let’s compute the successive multiples to see what happened along the way:
> multsell([1,3], 12, -5, 13, 11*19)
[[1,[1,3]],[2,[91,27]],[3,[118,133]],[4,[148,182]],[5,[20,35]],[6,["factor=",19]]]
When we computed 6P, we ended up at infinity mod 19. Let's see what is happening mod the two prime factors of 209, namely 19 and 11:
> multsell([1,3], 12, -5, 13, 19)
[[1,[1,3]],[2,[15,8]],[3,[4,0]],[4,[15,11]],[5,[1,16]],[6,["infinity","infinity"]],[7,[1,3]],[8,[15,8]],[9,[4,0]],[10,[15,11]],[11,[1,16]],[12,["infinity","infinity"]]]
> multsell([1,3], 24, -5, 13, 11)
[[1,[1,3]],[2,[3,5]],[3,[8,1]],[4,[5,6]],[5,[9,2]],[6,[6,10]],[7,[2,0]],[8,[6,1]],[9,[9,9]],[10,[5,5]],[11,[8,10]],[12,[3,6]],[13,[1,8]],[14,["infinity","infinity"]],[15,[1,3]],[16,[3,5]],[17,[8,1]],[18,[5,6]],[19,[9,2]],[20,[6,10]],[21,[2,0]],[22,[6,1]],[23,[9,9]],[24,[5,5]]]
After six steps, we were at infinity mod 19, but it takes 14 steps to reach infinity mod 11. To find 6P, we needed to invert a number that was 0 mod 19 and nonzero mod 11. This couldn't be done, but the attempt yielded the factor 19. This is the basis of the elliptic curve factorization method.
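A Python sketch of this idea: elliptic curve addition mod a composite n, where a non-invertible denominator surfaces a factor (generic code; the curve and point are those of this example):

```python
from math import gcd

def ec_add(P, Q, a, n):
    # Add points on y^2 = x^3 + a*x + b mod n; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                          # P + (-P) = infinity
    if P == Q:
        num, den = 3 * x1 * x1 + a, 2 * y1   # tangent slope
    else:
        num, den = y2 - y1, x2 - x1          # chord slope
    g = gcd(den % n, n)
    if g > 1:
        raise ValueError(g)                  # slope undefined: g divides n
    m = num * pow(den, -1, n) % n
    x3 = (m * m - x1 - x2) % n
    return (x3, (m * (x1 - x3) - y1) % n)

P = (1, 3)
R = P
try:
    for k in range(2, 13):                   # compute 2P, 3P, ..., 12P
        R = ec_add(R, P, -5, 209)
except ValueError as err:
    print("factor =", err.args[0])           # factor = 19, found at 6P
```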
Factor 193279 using elliptic curves.
SOLUTION
First, we need to choose some random elliptic curves and a point on each curve. For example, let's take P = (2, 4) and the elliptic curve y^2 ≡ x^3 − 10x + b (mod 193279). For P to lie on the curve, we take b = 28. We'll also take the curve y^2 ≡ x^3 + 11x − 11 with the point (1, 1), and the curve y^2 ≡ x^3 + 17x − 14 with the point (1, 2).
Now we compute multiples of the point P. We do the analog of the p − 1 method, so we choose a bound B, say B = 12, and compute 12!P on each curve:
> multell([2,4], factorial(12), -10, 28, 193279)
["factor=",347]
> multell([1,1], factorial(12), 11, -11, 193279)
[13862,35249]
> multell([1,2], factorial(12), 17, -14, 193279)
["factor=",557]
Let’s analyze in more detail what happened in these examples.
On the first curve, 12!P ends up at infinity mod 347 but is not infinity mod 557. The order of the point mod 557 has a prime factor larger than 12, so 12! is not a multiple of that order, and 12!P is not infinity mod 557. But 35 divides 12!, and the order of the point mod 347 divides 35, so 12!P is infinity mod 347.
On the second curve, 12!P is at infinity neither mod 347 nor mod 557, since neither order divides 12!, so we don't expect to find the factorization with this curve.
The third curve is a surprise. Again, 12!P is at infinity neither mod 347 nor mod 557, so we don't expect to find the factorization with this curve. However, by chance, an intermediate step in the calculation of 12!P yielded the factorization. Here's what happened. At one step, the program required adding the points (184993, 13462) and (20678, 150484). These two points are congruent mod 557 but not mod 347. Therefore, the slope of the line through these two points is defined mod 347, but is 0/0 mod 557. When we tried to find the multiplicative inverse of the denominator mod 193279, the gcd algorithm yielded the factor 557. This phenomenon is fairly rare.
Here is how to produce the example of an elliptic curve ElGamal cryptosystem from Section 21.5. For more details, see the text. The elliptic curve is y^2 ≡ x^3 + 3x + 45 (mod 8831) and the point is G = (4, 11). Alice's message is the point Pm = (5, 1743).
Bob has chosen his secret random number b = 3 and has computed bG:
> multell([4,11], 3, 3, 45, 8831)
[413,1808]
Bob publishes this point. Alice chooses the random number k = 8 and computes kG and Pm + k(bG):
> multell([4,11], 8, 3, 45, 8831)
[5415,6321]
> addell([5,1743],multell([413,1808],8,3,45,8831),3,45,8831)
[6626,3576]
Alice sends (5415,6321) and (6626,3576) to Bob, who multiplies the first of these points by his secret 3:
> multell([5415,6321], 3, 3, 45, 8831)
[673,146]
Bob then subtracts the result from the last point Alice sends him. Note that he subtracts by adding the point with the second coordinate negated:
> addell([6626,3576], [673,-146], 3, 45, 8831)
[5,1743]
Bob has therefore received Alice’s message.
Let's reproduce the numbers in the example of a Diffie-Hellman key exchange from Section 21.5. The elliptic curve is y^2 ≡ x^3 + x + 7206 (mod 7211) and the point is G = (3, 5). Alice chooses her secret 12 and Bob chooses his secret 23. Alice calculates 12G:
> multell([3,5], 12, 1, 7206, 7211)
[1794,6375]
She sends (1794,6375) to Bob. Meanwhile, Bob calculates 23G:
> multell([3,5], 23, 1, 7206, 7211)
[3861, 1242]
and sends (3861,1242) to Alice. Alice multiplies what she receives by 12, and Bob multiplies what he receives by 23:
> multell([3861,1242], 12, 1, 7206, 7211)
[1472,2098]
> multell([1794,6375], 23, 1, 7206, 7211)
[1472,2098]
Therefore, Alice and Bob have produced the same key.
These computer examples are written for MATLAB. If you have MATLAB available, you should try some of them on your computer. For information on getting started with MATLAB, see Section C.1. Several functions have been written to allow for experimentation with MATLAB. The MATLAB functions associated with this book are available at
bit.ly/2HyvR8n
We recommend that you create a directory or folder to store these files and download them to that directory or folder. One method for using these functions is to launch MATLAB from the directory where the files are stored, or launch MATLAB and change the current directory to where the files are stored. In some versions of MATLAB the working directory can be changed by changing the current directory on the command bar. Alternatively, one can add the path to that directory in the MATLAB path by using the path function or the Set Path option from the File menu on the command bar.
If MATLAB is not available, it is still possible to read the examples. They provide examples for several of the concepts presented in the book. Most of the examples used in the MATLAB appendix are similar to the examples in the Mathematica and Maple appendices. MATLAB, however, is limited in the size of the numbers it can handle. The maximum number that MATLAB can represent in its default mode is roughly 16 digits and larger numbers are approximated. Therefore, it is necessary to use the symbolic mode in MATLAB for some of the examples used in this book.
A final note before we begin. It may be useful when doing the MATLAB exercises to change the formatting of your display. The command
>> format rat
sets the formatting to represent numbers using a fractional representation. The conventional short format represents large numbers in scientific notation, which often doesn’t display some of the least significant digits. However, in both formats, the calculations, when not in symbolic mode, are done in floating point decimals, and then the rational format changes the answers to rational numbers approximating these decimals.
MATLAB is a programming language for performing technical computations. It is a powerful language that has become very popular and is rapidly becoming a standard instructional language for courses in mathematics, science, and engineering. MATLAB is available on most campuses, and many universities have site licenses allowing MATLAB to be installed on any machine on campus.
In order to launch MATLAB on a PC, double click on the MATLAB icon. If you want to run MATLAB on a Unix system, type matlab at the prompt. Upon launching MATLAB, you will see the MATLAB prompt:
>>
which indicates that MATLAB is waiting for you to type in a command. When you wish to quit MATLAB, type quit at the command prompt.
MATLAB is able to do the basic arithmetic operations such as addition, subtraction, multiplication, and division. These can be accomplished by the operators +, -, *, and /, respectively. In order to raise a number to a power, we use the operator ^ . Let us look at an example:
If we type at the prompt and press the Enter key
>> 2^7 + 125/5
then MATLAB will return the answer:
ans =153
Notice that in this example, MATLAB performed the exponentiation first, the division next, and then added the two results. The order of operations used in MATLAB is the one that we have grown up using. We can also use parentheses to change the order in which MATLAB calculates its quantities. The following example exhibits this:
>> 11*((128/(9+7)) - 2^(72/12))
ans =
  -616
In these examples, MATLAB has called the result of the calculations ans, which is a variable that is used by MATLAB to store the output of a computation. It is possible to assign the result of a computation to a specific variable. For example,
>> spot=17spot =17
assigns the value of 17 to the variable spot. It is possible to use variables in computations:
>> dog=11
dog =
  11
>> cat=7
cat =
  7
>> animals=dog+cat
animals =
  18
MATLAB also operates like an advanced scientific calculator since it has many functions available to it. For example, we can do the standard operation of taking a square root by using the sqrt function, as in the following example:
>> sqrt(1024)ans =32
There are many other functions available. Some functions that will be useful for this book are mod, factorial, factor, prod, and size.
Help is available in MATLAB. You may either type help at the prompt, or pull down the Help menu. MATLAB also provides help from the command line by typing help commandname. For example, to get help on the function mod, which we shall be using a lot, type the following:
>> help mod
MATLAB has a collection of toolboxes available. The toolboxes consist of collections of functions that implement many application-specific tasks. For example, the Optimization toolbox provides a collection of functions that do linear and nonlinear optimization. Generally, not all toolboxes are available. However, for our purposes, this is not a problem since we will only need general MATLAB functions and have built our own functions to explore the number theory behind cryptography.
The basic data type used in MATLAB is the matrix. The MATLAB programming language has been written to use matrices and vectors as the most fundamental data type. This is natural since many mathematical and scientific problems lend themselves to using matrices and vectors.
Let us start by giving an example of how one enters a matrix in MATLAB. Suppose we wish to enter the matrix
1  1  1  1
1  2  4  8
1  3  9 27
1  4 16 64
into MATLAB. To do this we type:
>> A = [1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64]
at the prompt. MATLAB returns
A =
  1  1  1  1
  1  2  4  8
  1  3  9 27
  1  4 16 64
There are a few basic rules that are used when entering matrices or vectors. First, a vector or matrix is started by using a square bracket [ and ended using a square bracket ]. Next, blanks or commas separate the elements of a row. A semicolon is used to end each row. Finally, we may place a semicolon at the very end to prevent MATLAB from displaying the output of the command.
To define a row vector, use blanks or commas. For example,
>> x = [2, 4, 6, 8, 10, 12]
x =
  2  4  6  8  10  12
To define a column vector, use semicolons. For example,
>> y=[1;3;5;7]
y =
  1
  3
  5
  7
In order to access a particular element of y, put the desired index in parentheses. For example, y(1) = 1, y(2) = 3, and so on.
MATLAB provides a useful notation for addressing multiple elements at the same time. For example, to access the third, fourth, and fifth elements of x, we would type
>> x(3:5)
ans =
  6  8  10
The 3:5 tells MATLAB to start at 3 and count up to 5. To access every second element of x, you can do this by
>> x(1:2:6)
ans =
  2  6  10
We may do this for the array also. For example,
>> A(1:2:4,2:2:4)
ans =
  1  1
  3 27
The notation 1:n may also be used to assign to a variable. For example,
>> x=1:7
returns
x =1 2 3 4 5 6 7
MATLAB provides the size function to determine the dimensions of a vector or matrix variable. For example, if we want the dimensions of the matrix that we entered earlier, then we would do
>> size(A)ans =4 4
It is often necessary to display numbers in different formats. MATLAB provides several output formats for displaying the result of a computation. To find a list of formats available, type
>> help format
The short format is the default format and is very convenient for doing many computations. However, in this book, we will be representing long whole numbers, and the short format will cut off some of the trailing digits in a number. For example,
>> a=1234567899
a =
  1.2346e+009
Instead of using the short format, we shall use the rational format. To switch MATLAB to using the rational format, type
>> format rat
As an example, if we do the same example as before, we now get different results:
>> a=1234567899a =1234567899
This format is also useful because it allows us to represent fractions in their fractional form, for example,
>> 111/323ans =111/323
In many situations, it will be convenient to suppress the results of a computation. In order to have MATLAB suppress printing out the results of a command, a semicolon must follow the command. Also, multiple commands may be entered on the same line by separating them with commas. For example,
>> dogs=11, cats=7; elephants=3, zebras=19;
dogs =
  11
elephants =
  3
returns the values for the variables dogs and elephants but does not display the values for cats and zebras.
MATLAB can also handle variables that are made of text. A string is treated as an array of characters. To assign a string to a variable, enclose the text with single quotes. For example,
>> txt='How are you today?'
returns
txt =How are you today?
A string has size much like a vector does. For example, the size of the variable txt is given by
>> size(txt)ans =1 18
It is possible to edit the characters one by one. For example, the following command changes the first word of txt:
>> txt(1)='W'; txt(2)='h'; txt(3)='o'
txt =
Who are you today?
As you work in MATLAB, it will remember the commands you have entered as well as the values of the variables you have created. To scroll through your previous commands, press the up-arrow and down-arrow. In order to see the variables you have created, type who at the prompt. A similar command whos gives the variables, their size, and their type information.
Notes. 1. To use the commands that have been written for the examples, you should run MATLAB in the directory into which you have downloaded the file from the Web site bit.ly/2HyvR8n
2. Some of the examples and computer problems use long ciphertexts, etc. For convenience, these have been stored in the file ciphertexts.m, which can be loaded by typing ciphertexts at the prompt. The ciphertexts can then be referred to by their names. For example, see Computer Example 4 for Chapter 2.
A shift cipher was used to obtain the ciphertext kddkmu.
Decrypt it by trying all possibilities.
>> allshift('kddkmu')
kddkmu
leelnv
mffmow
nggnpx
ohhoqy
piiprz
qjjqsa
rkkrtb
sllsuc
tmmtvd
unnuwe
voovxf
wppwyg
xqqxzh
yrryai
zsszbj
attack
buubdl
cvvcem
dwwdfn
exxego
fyyfhp
gzzgiq
haahjr
ibbiks
jccjlt
As you can see, attack is the only word that occurs on this list, so that was the plaintext.
Encrypt the plaintext message cleopatra using the affine function 7x + 8 (mod 26):
>> affinecrypt('cleopatra',7,8)
ans =
'whkcjilxi'
The ciphertext mzdvezc was encrypted using the affine function 5x + 12 (mod 26). Decrypt it.
SOLUTION
First, solve y ≡ 5x + 12 (mod 26) for x to obtain x ≡ 5^(−1)(y − 12) (mod 26). We need to find the inverse of 5 (mod 26):
>> powermod(5,-1,26)
ans =
  21
Therefore, x ≡ 21(y − 12) ≡ 21y − 21 · 12 (mod 26). To change −21 · 12 to standard form:
>> mod(-12*21,26)
ans =
  8
Therefore, the decryption function is x ≡ 21y + 8 (mod 26). To decrypt the message:
>> affinecrypt('mzdvezc',21,8)
ans =
'anthony'
In case you were wondering, the plaintext was encrypted as follows:
>> affinecrypt('anthony',5,12)
ans =
'mzdvezc'
Here is the example of a Vigenère cipher from the text. Let’s see how to produce the data that was used in Section 2.3 to decrypt the ciphertext. In the file ciphertexts.m, the ciphertext is stored under the name vvhq. If you haven’t already done so, load the file ciphertexts.m:
>> ciphertexts
Now we can use the variable vvhq to obtain the ciphertext:
>> vvhqvvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljs hbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwl wgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfq twlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivw ikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile
We now find the frequencies of the letters in the ciphertext. We use the function frequency. The frequency command was written to display automatically the letter and the count next to it. We therefore have put a semicolon at the end of the command to prevent MATLAB from displaying the count twice.
>> fr=frequency(vvhq);
a 8    b 5    c 12   d 4    e 15   f 10   g 27   h 16   i 13
j 14   k 17   l 25   m 7    n 7    o 5    p 9    q 14   r 17
s 24   t 8    u 12   v 22   w 22   x 5    y 8    z 5
Let’s compute the coincidences for displacements of 1, 2, 3, 4, 5, 6:
>> coinc(vvhq,1)
ans =
  14
>> coinc(vvhq,2)
ans =
  14
>> coinc(vvhq,3)
ans =
  16
>> coinc(vvhq,4)
ans =
  14
>> coinc(vvhq,5)
ans =
  24
>> coinc(vvhq,6)
ans =
  12
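The coincidence count itself is a one-liner; here is a Python sketch of what coinc computes (not the MATLAB source):

```python
def coinc(text, displacement):
    # count positions where the ciphertext agrees with its own shift
    return sum(a == b for a, b in zip(text, text[displacement:]))

print(coinc("ababab", 2))   # 4
```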
We conclude that the key length is probably 5. Let’s look at the 1st, 6th, 11th, ... letters (namely, the letters in positions congruent to 1 mod 5). The function choose will do this for us. The function choose(txt,m,n) extracts every letter from the string txt that has positions congruent to n mod m.
>> choose(vvhq,5,1)
ans =
vvuttcccqgcunjtpjgkuqpknjkygkkgcjfqrkqjrqudukvpkvggjjivgjggpfncwuce
We now do a frequency count of the preceding substring. To do this, we use the frequency function and use ans as input. In MATLAB, if a command is issued without declaring a variable for the result, MATLAB will put the output in the variable ans.
>> frequency(ans);
a 0    b 0    c 7    d 1    e 1    f 2    g 9    h 0    i 1
j 8    k 8    l 0    m 0    n 3    o 0    p 4    q 5    r 2
s 0    t 3    u 6    v 5    w 1    x 0    y 1    z 0
To express this as a vector of frequencies, we use the vigvec function. The vigvec function will not only display the frequency counts just shown, but will return a vector that contains the frequencies. In the following output, we have suppressed the table of frequency counts since they appear above and have reported the results in the short format.
>> vigvec(vvhq,5,1)
ans =
  0       0       0.1045  0.0149  0.0149  0.0299  0.1343  0       0.0149
  0.1194  0.1194  0       0       0.0448  0       0.0597  0.0746  0.0299
  0       0.0448  0.0896  0.0746  0.0149  0       0.0149  0
(If we are working in rational format, these numbers are displayed as rationals.) The dot products of this vector with the displacements of the alphabet frequency vector are computed as follows:
>> corr(ans)
ans =
  0.0250  0.0391  0.0713  0.0388  0.0275  0.0380  0.0512  0.0301  0.0325
  0.0430  0.0338  0.0299  0.0343  0.0446  0.0356  0.0402  0.0434  0.0502
  0.0392  0.0296  0.0326  0.0392  0.0366  0.0316  0.0488  0.0349
The third entry is the maximum, but sometimes the largest entry is hard to locate. One way to find it is
>> max(ans)ans =0.0713
Now it is easy to look through the list and find this number (it usually occurs only once). Since it occurs in the third position, the first shift for this Vigenère cipher is by 2, corresponding to the letter c. A procedure similar to the one just used (using vigvec(vvhq, 5,2), . . . , vigvec(vvhq,5,5)) shows that the other shifts are probably 14, 3, 4, 18. Let’s check that we have the correct key by decrypting.
>> vigenere(vvhq,-[2,14,3,4,18])
ans =
themethodusedforthepreparationandreadingofcodemessagesissimpleinthe extremeandatthesametimeimpossibleoftranslationunlessthekeyisknownth eeasewithwhichthekeymaybechangedisanotherpointinfavoroftheadoptiono fthiscodebythosedesiringtotransmitimportantmessageswithouttheslight estdangeroftheirmessagesbeingreadbypoliticalorbusinessrivalsetc
For the record, the plaintext was originally encrypted by the command
>> vigenere(ans,[2,14,3,4,18])
ans =
vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfsvgahwkcuhwauglqhnslrljs hbltspisprdxljsveeghlqwkasskuwepwqtwvspgoelkcqyfnsvwljsniqkgnrgybwl wgoviokhkazkqkxzgyhcecmeiujoqkwfwvefqhkijrclrlkbienqfrjljsdhgrhlsfq twlauqrhwdmwlgusgikkflryvcwvspgpmlkassjvoqxeggveyggzmljcxxljsvpaivw ikvrdrygfrjljslveggveyggeiapuuisfpbtgnwwmuczrvtwglrwugumnczvile
Find gcd(23456, 987654).
>> gcd(23456,987654)
ans =
  2
If larger integers are used, they should be expressed in symbolic mode; otherwise, only the first 16 digits of the entries are used accurately. The present calculation could have been done as
>> gcd(sym('23456'),sym('987654'))
ans =
  2
Solve 23456x + 987654y = 2 in integers x, y.
>> [a,b,c]=gcd(23456,987654)
a =
  2
b =
  -3158
c =
  75
This means that 2 is the gcd and 23456 · (−3158) + 987654 · 75 = 2.
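The extended Euclidean algorithm that produces these coefficients can be sketched in Python:

```python
def egcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = egcd(23456, 987654)
print(g, 23456 * x + 987654 * y)   # 2 2
```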
Compute 234 · 456 (mod 789).
>> mod(234*456,789)
ans =
  189
Compute 234567^876543 (mod 565656565).
>> powermod(sym('234567'),sym('876543'),sym('565656565'))
ans =
  5334
Find the multiplicative inverse of 87878787 (mod 9191919191).
>> invmodn(sym('87878787'),sym('9191919191'))
ans =
  7079995354
Solve 7654x ≡ 2389 (mod 65537).
SOLUTION
To solve this problem, we follow the method described in Section 3.3. We calculate 7654^(−1) (mod 65537) and then multiply it by 2389:
>> invmodn(7654,65537)
ans =
  54637
>> mod(ans*2389,65537)
ans =
  43626
Find x with x ≡ 2 (mod 78), x ≡ 5 (mod 97), x ≡ 1 (mod 119).
SOLUTION
To solve the problem we use the function crt.
>> crt([2 5 1],[78 97 119])ans =647480
We can check the answer:
>> mod(647480,[78 97 119])ans =2 5 1
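What crt does can be sketched in Python for pairwise coprime moduli:

```python
from math import prod

def crt(remainders, moduli):
    # Chinese remainder theorem: combine x = r_i (mod m_i)
    N = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)   # Ni^{-1} exists since gcd(Ni, m) = 1
    return x % N

print(crt([2, 5, 1], [78, 97, 119]))   # 647480
```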
Factor 123450 into primes.
>> factor(123450)
ans =
  2  3  5  5  823
This means that 123450 = 2 · 3 · 5^2 · 823.
Evaluate φ(12345).
>> eulerphi(12345)
ans =
  6576
Find a primitive root for the prime 65537.
>> primitiveroot(65537)ans =3
Therefore, 3 is a primitive root for 65537.
Find the inverse of the matrix M = [13 12 35; 41 53 62; 71 68 10] (mod 999).
SOLUTION
First, we enter the matrix as M:
>> M=[13 12 35; 41 53 62; 71 68 10];
Next, invert the matrix without the mod:
>> Minv=inv(M)
Minv =
   233/2158   -539/8142   103/3165
  -270/2309    139/2015   -40/2171
   209/7318    32/34139  -197/34139
We need to multiply Minv by the determinant of M in order to clear the fractions out of the numbers in Minv. Then we need to multiply by the inverse of the determinant mod 999:
>> Mdet=det(M)
Mdet =
  -34139
>> invmodn(Mdet,999)
ans =
  589
The answer is given by
>> mod(Minv*589*Mdet,999)
ans =
  772  472  965
  641  516  851
  150  133  149
Therefore, the inverse matrix mod 999 is [772 472 965; 641 516 851; 150 133 149].
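The same clear-the-determinant idea works in exact integer arithmetic; a Python sketch using cofactor expansion (fine at this size):

```python
def minor(A, r, c):
    # delete row r and column c
    return [row[:c] + row[c + 1:] for k, row in enumerate(A) if k != r]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def mat_inv_mod(M, n):
    # M^{-1} = det(M)^{-1} * adjugate(M) mod n; adjugate = cofactor transpose
    d = pow(det(M) % n, -1, n)
    size = len(M)
    return [[d * (-1) ** (i + j) * det(minor(M, j, i)) % n for j in range(size)]
            for i in range(size)]

M = [[13, 12, 35], [41, 53, 62], [71, 68, 10]]
print(mat_inv_mod(M, 999))
# [[772, 472, 965], [641, 516, 851], [150, 133, 149]]
```

Since the adjugate is an integer matrix, no floating-point rounding ever enters the computation.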
In many cases, it is possible to determine by inspection the common denominator that must be removed. When this is not the case, note that the determinant of the original matrix will always work as a common denominator.
Find a square root of 26951623672 mod the prime 98573007539.
SOLUTION
Since 98573007539 ≡ 3 (mod 4), we can use the proposition of Section 3.9:
>> powermod(sym('26951623672'),(sym('98573007539')+1)/4,sym('98573007539'))
ans =
  98338017685
The other square root is minus this one:
>> mod(-ans,sym('98573007539'))
ans =
  234989854
Let n = 34222273 = 9803 · 3491. Find all four solutions of x^2 ≡ 19101358 (mod 34222273).
SOLUTION
First, find a square root mod each of the two prime factors, both of which are congruent to 3 (mod 4):
>> powermod(19101358,(9803+1)/4,9803)
ans =
  3998
>> powermod(19101358,(3491+1)/4,3491)
ans =
  1318
Therefore, the square roots mod 9803 are congruent to ±3998 and the square roots mod 3491 are congruent to ±1318. There are four ways to combine these using the Chinese remainder theorem:
>> crt([3998 1318],[9803 3491])
ans =
  43210
>> crt([-3998 1318],[9803 3491])
ans =
  8397173
>> crt([3998 -1318],[9803 3491])
ans =
  25825100
>> crt([-3998 -1318],[9803 3491])
ans =
  34179063
These are the four desired square roots.
Compute the first 50 terms of the recurrence x_{n+5} ≡ x_n + x_{n+2} (mod 2). The initial values are 0, 1, 0, 0, 0.
SOLUTION
The vector of coefficients is [1 0 1 0 0] and the initial values are given by the vector [0 1 0 0 0]. Type
>> lfsr([1 0 1 0 0],[0 1 0 0 0],50)
ans =
  Columns 1 through 12
    0 1 0 0 0 0 1 0 0 1 0 1
  Columns 13 through 24
    1 0 0 1 1 1 1 1 0 0 0 1
  Columns 25 through 36
    1 0 1 1 1 0 1 0 1 0 0 0
  Columns 37 through 48
    0 1 0 0 1 0 1 1 0 0 1 1
  Columns 49 through 50
    1 1
Suppose the first 20 terms of an LFSR sequence are 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1. Find a recursion that generates this sequence.
SOLUTION
First, we find a candidate for the length of the recurrence. The command lfsrlength(v, n) calculates the determinants mod 2 of the first n matrices that appear in the procedure described in Section 5.2 for the sequence v. Recall that the last nonzero determinant gives the length of the recurrence.
>> lfsrlength([1 0 1 0 1 1 1 0 0 0 0 1 1 1 0 1 0 1 0 1],10)
Order  Determinant
  1    1
  2    1
  3    0
  4    1
  5    0
  6    1
  7    0
  8    0
  9    0
 10    0
The last nonzero determinant is the sixth one, so we guess that the recurrence has length 6. To find the coefficients:
>> lfsrsolve([1 0 1 0 1 1 1 0 0 0 0 1 1 1 0 1 0 1 0 1],6)
ans =
  1 0 1 1 1 0
This gives the recurrence as x_{n+6} ≡ x_n + x_{n+2} + x_{n+3} + x_{n+4} (mod 2).
The ciphertext 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0 was produced by adding the output of an LFSR onto the plaintext mod 2 (i.e., XOR the plaintext with the LFSR output). Suppose you know that the plaintext starts 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0. Find the rest of the plaintext.
SOLUTION
XOR the ciphertext with the known part of the plaintext to obtain the beginning of the LFSR output:
>> x=mod([1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0]+[0 1 1 0 1 0 1 0 1 0 0 1 1 0 0 0 1],2)
x =
Columns 1 through 12
1 0 0 1 0 1 1 0 1 0 0 1
Columns 13 through 17
0 1 1 0 1
This is the beginning of the LFSR output. Let’s find the length of the recurrence:
>> lfsrlength(x,8)
Order  Determinant
1      1
2      0
3      1
4      0
5      1
6      0
7      0
8      0
We guess the length is 5. To find the coefficients of the recurrence:
>> lfsrsolve(x,5)ans =1 1 0 0 1
Now we can generate the full output of the LFSR using the coefficients we just found plus the first five terms of the LFSR output:
>> lfsr([1 1 0 0 1],[1 0 0 1 0],40)
ans =
Columns 1 through 12
1 0 0 1 0 1 1 0 1 0 0 1
Columns 13 through 24
0 1 1 0 1 0 0 1 0 1 1 0
Columns 25 through 36
1 0 0 1 0 1 1 0 1 0 0 1
Columns 37 through 40
0 1 1 0
When we XOR the LFSR output with the ciphertext, we get back the plaintext:
>> mod(ans+[0 1 1 0 1 0 1 0 1 0 0 1 1 0 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 0],2)
ans =
Columns 1 through 12
1 1 1 1 1 1 0 0 0 0 0 0
Columns 13 through 24
1 1 1 0 0 0 1 1 1 1 0 0
Columns 25 through 36
0 0 1 1 1 1 1 1 1 0 0 0
Columns 37 through 40
0 0 0 0
This is the plaintext.
The ciphertext 22, 9, 0, 12, 3, 1, 10, 3, 4, 8, 1, 17 was encrypted using a Hill cipher with matrix
1  2  3
4  5  6
7  8 10
Decrypt it.
SOLUTION
A matrix is entered as M=[a b c; d e f; g h i]. Type M*N to multiply matrices M and N. Type v*M to multiply a vector v on the right by a matrix M.
First, we put the above matrix in the variable M.
>> M=[1 2 3; 4 5 6; 7 8 10]
M =
     1     2     3
     4     5     6
     7     8    10
Next, we need to invert the matrix mod 26:
>> Minv=inv(M)
Minv =
   -0.6667   -1.3333    1.0000
   -0.6667    3.6667   -2.0000
    1.0000   -2.0000    1.0000
Since we are working mod 26, we can't stop with numbers like -0.6667. We need to get rid of the denominators and reduce mod 26. To do so, we multiply by 3 to extract the numerators of the fractions, then multiply by the inverse of 3 mod 26 to put the "denominators" back in (see Section 3.3):
>> M1=Minv*3
M1 =
    -2    -4     3
    -2    11    -6
     3    -6     3
>> M2=mod(round(M1*9),26)
M2 =
     8    16     1
     8    21    24
     1    24     1
Note that we used the function round in calculating M2. This was done since MATLAB performs its calculations in floating point and calculating the inverse matrix produces numbers that are slightly different from whole numbers. For example, consider the following:
>> a=1.99999999;display([a, mod(a,2), mod(round(a),2)])
2.0000 2.0000 0
The matrix M2 is the inverse of the matrix M mod 26. We can check this as follows:
>> mod(M2*M,26)ans =1 0 00 1 00 0 1
To decrypt, we break the ciphertext into blocks of three numbers and multiply each block on the right by the inverse matrix we just calculated:
>> mod([22,9,0]*M2,26)ans =14 21 4>> mod([12,3,1]*M2,26)ans =17 19 7>> mod([10,3,4]*M2,26)ans =4 7 8>> mod([8,1,17]*M2,26)ans =11 11 23
Therefore, the plaintext is 14, 21, 4, 17, 19, 7, 4, 7, 8, 11, 11, 23. This can be changed back to letters:
>> int2text([14 21 4 17 19 7 4 7 8 11 11 23])ans =overthehillx
Note that the final x was appended to the plaintext in order to complete a block of three letters.
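The whole decryption can be redone in exact integer arithmetic in Python, which avoids the floating-point round step entirely. The function names below are ours; the inverse is computed from the adjugate and the determinant mod 26:

```python
def mat_inv_mod(M, m):
    """Inverse of a 3x3 integer matrix mod m, via adjugate / determinant."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    det_inv = pow(det % m, -1, m)          # modular inverse of the determinant
    return [[det_inv * x % m for x in row] for row in adj]

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
M2 = mat_inv_mod(M, 26)
print(M2)    # [[8, 16, 1], [8, 21, 24], [1, 24, 1]]

def decrypt_block(block, Minv, m=26):
    # row vector times matrix, as in the text
    return [sum(block[k] * Minv[k][j] for k in range(3)) % m for j in range(3)]

pt = []
for blk in [[22, 9, 0], [12, 3, 1], [10, 3, 4], [8, 1, 17]]:
    pt += decrypt_block(blk, M2)
print(pt)    # [14, 21, 4, 17, 19, 7, 4, 7, 8, 11, 11, 23]
```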
Two functions, nextprime and randprime, can be used to generate prime numbers. The function nextprime takes a number n as input and attempts to find the next prime after n. The function randprime takes a number n as input and attempts to find a random prime between 1 and n. It uses the Miller-Rabin test described in Chapter 9.
>> nextprime(346735)ans =346739>> randprime(888888)ans =737309
For larger inputs, use symbolic mode:
>> nextprime(10^sym(60))ans =1000000000000000000000000000000000000000000000000000000000007>> randprime(10^sym(50))ans =58232516535825662451486550708068534731864929199219
It is interesting to note the difference that the quotation marks make when entering a large integer:
>> nextprime(sym('123456789012345678901234567890'))
ans =
123456789012345678901234567907
>> nextprime(sym(123456789012345678901234567890))
ans =
123456789012345677877719597071
In the second case, the input was a double-precision number, so only about the first 16 digits were stored accurately before the conversion to symbolic mode, while the first case regarded the entire input as a string and therefore used all of the digits.
Suppose you want to change the text hellohowareyou to numbers:
>> text2int1(’hellohowareyou’)ans =805121215081523011805251521
Note that we are now using a = 1, b = 2, ..., z = 26, since otherwise a's at the beginnings of messages would disappear. (A more efficient procedure would be to work in base 27, which would produce a numerical form of the message with fewer digits.)
Now suppose you want to change it back to letters:
>> int2text1(805121215081523011805251521)ans =’hellohowareyou’
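The two-digits-per-letter encoding is easy to reproduce in Python (the function names mirror the toolbox's, but these are our own sketches):

```python
def text2int1(s):
    # a=1, ..., z=26, two digits per letter; a leading zero drops
    # when the string is read as an integer
    return int("".join("%02d" % (ord(ch) - ord('a') + 1) for ch in s))

def int2text1(n):
    digits = str(n)
    if len(digits) % 2:            # restore the dropped leading zero
        digits = "0" + digits
    return "".join(chr(int(digits[i:i+2]) + ord('a') - 1)
                   for i in range(0, len(digits), 2))

print(text2int1("hellohowareyou"))             # 805121215081523011805251521
print(int2text1(805121215081523011805251521))  # hellohowareyou
```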
Encrypt the message hi using RSA with n = 823091 and e = 17.
SOLUTION
First, change the message to numbers:
>> text2int1(’hi’)ans =809
Now, raise it to the 17th power mod 823091:
>> powermod(ans,17,823091)ans =596912
Decrypt the ciphertext in the previous problem.
SOLUTION
First, we need to find the decryption exponent d. To do this, we need to find φ(823091). One way is
>> eulerphi(823091)ans =821184
Another way is to factor 823091 as 659 · 1249 and then compute 658 · 1248:
>> factor(823091)ans =659 1249>> 658*1248ans =821184
Since de ≡ 1 (mod φ(n)), we compute the following (note that we are finding the inverse of 17 mod 821184, not mod 823091):
>> invmodn(17,821184)ans =48305
Therefore, d = 48305. To decrypt, raise the ciphertext to the 48305th power mod 823091:
>> powermod(596912,48305,823091)ans =809
Finally, change back to letters:
>> int2text1(ans)ans =hi
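The whole round trip fits in a few lines of Python; pow with a negative exponent computes the modular inverse, replacing invmodn:

```python
n, e = 823091, 17
m = 809                       # 'hi' as numbers
c = pow(m, e, n)              # encrypt
print(c)                      # 596912
p, q = 659, 1249              # factorization of n
d = pow(e, -1, (p - 1) * (q - 1))   # decryption exponent
print(d)                      # 48305
print(pow(c, d, n))           # 809, the original message
```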
Encrypt hellohowareyou using RSA with n = 823091 and e = 17.
SOLUTION
First, change the plaintext to numbers:
>> text2int1(’hellohowareyou’)ans =805121215081523011805251521
Suppose we simply raised this to the 17th power mod 823091:
>> powermod(ans,17,823091)ans =447613
If we decrypt (we know d = 48305 from Example 25), we obtain
>> powermod(ans,48305,823091)ans =628883
This is not the original plaintext. The reason is that the plaintext is larger than n = 823091, so all we have obtained is the plaintext mod 823091:
>> mod(text2int1(’hellohowareyou’),823091)ans =628883
We need to break the plaintext into blocks, each less than 823091. In our case, we use three letters at a time:
>> powermod(80512,17,823091)ans =757396>> powermod(121508,17,823091)ans =164513>> powermod(152301,17,823091)ans =121217>> powermod(180525,17,823091)ans =594220>> powermod(1521,17,823091)ans =442163
The ciphertext is therefore 757396164513121217594220442163. Note that there is no reason to change this back to letters. In fact, it doesn’t correspond to any text with letters.
Decrypt each block individually:
>> powermod(757396,48305,823091)ans =80512>> powermod(164513,48305,823091)ans =121508
Etc.
We’ll now do some examples with large numbers, namely the numbers in the RSA Challenge discussed in Section 9.5. These are stored under the names rsan, rsae, rsap, rsaq:
>> rsanans =114381625757888867669235779976146612010218296721242362562561842935 706935245733897830597123563958705058989075147599290026879543541>> rsaeans =9007
Encrypt each of the messages b, ba, bar, bard using rsan and rsae.
>> powermod(text2int1('b'), rsae, rsan)
ans =
70946758467612668598370164991550786182876331060685235410564704114486782261716497200122155332348462014053287987580899263765142534
>> powermod(text2int1('ba'), rsae, rsan)
ans =
35045130608975100325011709449871954273788204753948593060313697698227621759806027962270538031565564773352033671782261305796158951
>> powermod(text2int1('bar'), rsae, rsan)
ans =
44814512863855101076004530859492109342429531606607409070360543408000843645986880405953102818312822586362580298784441151922606424
>> powermod(text2int1('bard'), rsae, rsan)
ans =
24238077785111666423202862512090317393485212959056270783134991614256054323297179804928958073445752663026449873986877989329909498
Observe that the ciphertexts are all the same length. There seems to be no easy way to determine the length of the corresponding plaintext.
Using the factorization rsan = rsap · rsaq, find the decryption exponent for the RSA Challenge, and decrypt the ciphertext (see Section 9.5).
SOLUTION
First, we find the decryption exponent:
>> rsad=invmodn(rsae,(rsap-1)*(rsaq-1));
Note that we use the final semicolon to avoid printing out the value. If you want to see the value of rsad, see Section 9.5, or don’t use the semicolon. To decrypt the ciphertext, which is stored as rsaci, and change to letters:
>> int2text1(powermod(rsaci, rsad, rsan))ans =the magic words are squeamish ossifrage
Encrypt the message rsaencryptsmessageswell using rsan and rsae.
>> ci = powermod(text2int1(’rsaencryptsmessageswell’), rsae, rsan)ci =946394203490022593163058235392494964146409699340017097214043524182 71950654254365584906013966328817753539283112653197553130781884
We called the ciphertext ci because we need it in Example 30.
Decrypt the preceding ciphertext.
SOLUTION
Fortunately, we know the decryption exponent rsad. Therefore, we compute
>> powermod(ci, rsad, rsan)
ans =
1819010514031825162019130519190107051923051212
>> int2text1(ans)
ans =
rsaencryptsmessageswell
Suppose we lose the final 4 of the ciphertext in transmission. Let's try to decrypt what's left (subtracting 4 and dividing by 10 is a mathematical way to remove the 4):
>> powermod((ci - 4)/10, rsad, rsan)
ans =4795299917319598866490235262952548640911363389437562984685490797 0588412300373487969657794254117158956921267912628461494475682806
If we try to change this to letters, we get a weird-looking answer. A small error in the plaintext completely changes the decrypted message and usually produces garbage.
Suppose we are told that n = 11313771275590312567 is the product of two primes p and q, and that φ(n) = 11313771187608744400. Factor n.
SOLUTION
We know (see Section 9.1) that p and q are the roots of X^2 − (n − φ(n) + 1)X + n. Therefore, we compute (vpa is for variable precision arithmetic)
>> digits(50); syms y; vpasolve(y^2-(sym('11313771275590312567')-sym('11313771187608744400')+1)*y+sym('11313771275590312567'),y)
ans =
128781017.0
87852787151.0
Therefore, n = 128781017 · 87852787151. We also could have used the quadratic formula to find the roots.
Suppose we know rsae and rsad. Use these to factor rsan.
SOLUTION
We use the factorization method from Section 9.4. First write rsae · rsad − 1 = 2^s · m with m odd. One way to do this is first to compute
>> rsae*rsad - 1ans =9610344196177822661569190233595838341098541290518783302506446040 41155985575087352659156174898557342995131594680431086921245830097664>> ans/2ans =4805172098088911330784595116797919170549270645259391651253223020 20577992787543676329578087449278671497565797340215543460622915048832>> ans/2ans =2402586049044455665392297558398959585274635322629695825626611510 10288996393771838164789043724639335748782898670107771730311457524416
We continue this way for six more steps until we get
ans =3754040701631961977175464934998374351991617691608899727541580484 535765568652684971324828808197489621074732791720433933286116523819
This number is m. Now choose a random integer a. Hoping to be lucky, we choose 13. As in the factorization method, we compute b0 ≡ a^m (mod rsan):
>>b0=powermod(13, ans, rsan)b0 =2757436850700653059224349486884716119842309570730780569056983964 7030183109839862370800529338092984795490192643587960859870551239
Since this is not ±1 (mod rsan), we successively square it until we get 1:
>> b1=powermod(b0,2,rsan)b1 =4831896032192851558013847641872303455410409906994084622549470277 6654996412582955636035266156108686431194298574075854037512277292>> b2=powermod(b1,2,rsan)b2 =7817281415487735657914192805875400002194878705648382091793062511 5215181839742056013275521913487560944732073516487722273875579363>> b3=powermod(b2, 2, rsan)b3 =4283619120250872874219929904058290020297622291601776716755187021 6509444518239462186379470569442055101392992293082259601738228702>> b4=powermod(b3, 2, rsan)b4 =1
Since the last number before the 1 was not ±1 (mod rsan), we have a square root b3 of 1 with b3 ≢ ±1. Therefore, gcd(b3 − 1, rsan) is a nontrivial factor of rsan:
>> gcd(b3 - 1, rsan)ans =32769132993266709549961988190834461413177642967992942539798288533
This is rsaq. The other factor is obtained by computing rsan/rsaq:
>> rsan/ansans =3490529510847650949147849619903898133417764638493387843990820577
This is rsap.
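The same exponent-based factorization works for any RSA modulus once both exponents are known. Here is a Python sketch (the function name is ours), tested on the small modulus from Example 25; it retries random bases until a square root of 1 other than ±1 appears:

```python
import math, random

def factor_from_exponents(n, e, d):
    """Factor n given both RSA exponents (method of Section 9.4)."""
    m = e * d - 1
    while m % 2 == 0:              # write e*d - 1 = 2^s * m with m odd
        m //= 2
    while True:
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) > 1:     # extremely lucky: a shares a factor with n
            return math.gcd(a, n)
        b = pow(a, m, n)
        if b in (1, n - 1):
            continue               # this base fails; try another
        while True:
            b2 = pow(b, 2, n)
            if b2 == 1:            # b is a square root of 1 other than ±1
                return math.gcd(b - 1, n)
            if b2 == n - 1:
                break              # chain hit -1; try another base
            b = b2

f = factor_from_exponents(823091, 17, 48305)
print(sorted([f, 823091 // f]))   # [659, 1249]
```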
Suppose you know that
150883475569451^2 ≡ 16887570532858^2 (mod 205611444308117).
Factor 205611444308117.
SOLUTION
We use the Basic Principle of Section 9.4.
>> g= gcd(150883475569451-16887570532858,205611444308117)g =23495881
This gives one factor. The other is
>> 205611444308117/gans =8750957
We can check that these factors are actually primes, so we can’t factor any further:
>> primetest(ans)ans =1>> primetest(g)ans =1
Factor 376875575426394855599989992897873239 by the p − 1 method.
SOLUTION
Let's choose our bound as B = 100, and let's take a = 2, so we compute 2^(100!) (mod 376875575426394855599989992897873239):
>> powermod(2,factorial(100),sym('376875575426394855599989992897873239'))
ans =
369676678301956331939422106251199512
Then we compute the gcd of 2^(100!) − 1 and n:
>> gcd(ans - 1, sym('376875575426394855599989992897873239'))
ans =
430553161739796481
This is a factor p. The other factor is
>> sym(’376875575426394855599989992897873239’)/ansans =875328783798732119
Let's see why this worked. The factorizations of p − 1 and q − 1 are
>> factor(sym(’430553161739796481’) - 1)ans =[ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 5, 7, 7, 7, 7, 11, 11, 11, 47]>> factor(sym(’875328783798732119’) - 1)ans =[ 2, 61, 20357, 39301, 8967967]
We see that p − 1 is a product of prime powers dividing 100!, so 2^(100!) ≡ 1 (mod p). However, q − 1 has the large prime factors 20357, 39301, and 8967967, so 100! is not a multiple of the order of 2 mod q, and it is likely that 2^(100!) ≢ 1 (mod q). Therefore, both 2^(100!) − 1 and n have p as a factor, but only n has q as a factor. It follows that the gcd is p.
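The p − 1 method itself is only a few lines of Python. Rather than building the huge exponent 100! explicitly, the sketch below (function name ours) raises a to each of 2, 3, ..., B in turn, which produces the same value a^(B!) mod n:

```python
from math import gcd

def pminus1(n, B=100, a=2):
    """Pollard's p-1 factorization attempt with bound B."""
    for k in range(2, B + 1):
        a = pow(a, k, n)          # after the loop, a = 2^(B!) mod n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

n = 376875575426394855599989992897873239
g = pminus1(n)
print(g, n // g)    # 430553161739796481 875328783798732119
```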
Let's solve the discrete log problem 2^x ≡ 71 (mod 131) by the Baby Step-Giant Step method of Subsection 10.2.2. We take N = 12 since N^2 ≥ 131, and we form two lists. The first is 2^k (mod 131) for 0 ≤ k ≤ 11:
>> for k=0:11;z=[k, powermod(2,k,131)];disp(z);end;
0 1
1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 125
9 119
10 107
11 83
The second is 71 · 2^(−12k) (mod 131) for 0 ≤ k ≤ 11:
>> for k=0:11;z=[k, mod(71*invmodn(powermod(2,12*k,131),131),131)];disp(z);end;
0 71
1 17
2 124
3 26
4 128
5 86
6 111
7 93
8 85
9 96
10 130
11 116
The number 128 is on both lists, so we see that 2^7 ≡ 71 · 2^(−12·4) (mod 131). Therefore, 71 ≡ 2^(7+48) ≡ 2^55 (mod 131), and the discrete log is 55.
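The two-list search can be packaged as a Python function (a sketch, with our own name); it stores the baby steps in a dictionary and walks the giant steps until a collision:

```python
import math

def bsgs(alpha, beta, p):
    """Solve alpha^x ≡ beta (mod p) by Baby Step-Giant Step."""
    N = math.isqrt(p - 1) + 1
    baby = {pow(alpha, j, p): j for j in range(N)}   # alpha^j for 0 <= j < N
    giant = pow(alpha, -N, p)                        # alpha^(-N) mod p
    y = beta % p
    for k in range(N + 1):
        if y in baby:           # beta * alpha^(-N*k) = alpha^j
            return k * N + baby[y]
        y = y * giant % p
    return None

x = bsgs(2, 71, 131)
print(x)                 # 55
print(pow(2, x, 131))    # 71
```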
Suppose there are 23 people in a room. What is the probability that at least two have the same birthday?
SOLUTION
The probability that no two have the same birthday is the product of (1 − j/365) for j = 1, ..., 22 (note that the product stops at 22, not 23). Subtracting from 1 gives the probability that at least two have the same birthday:
>> 1-prod( 1 - (1:22)/365)ans =0.5073
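The same product is a one-line loop in Python:

```python
prob_no_match = 1.0
for j in range(1, 23):              # j = 1, ..., 22
    prob_no_match *= 1 - j / 365
print(1 - prob_no_match)            # about 0.5073
```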
Suppose a lazy phone company employee assigns telephone numbers by choosing random seven-digit numbers. In a town with 10,000 phones, what is the probability that two people receive the same number?
>> 1-prod( 1 - (1:9999)/10^7)ans =0.9933
Note that the number of phones is about three times the square root of the number of possibilities. This means that we expect the probability to be high, which it is. From Section 12.1, we have the estimate that if there are around 1.177·sqrt(10^7) ≈ 3722 phones, there should be a 50% chance of a match. Let's see how accurate this is:
>> 1-prod( 1 - (1:3722)/10^7)ans =0.4999
Suppose we have a (5, 8) Shamir secret sharing scheme. Everything is mod the prime p = 987541. Five of the shares are
(9853, 853), (4421, 4387), (6543, 1234), (93293, 78428), (12398, 7563).
Find the secret.
SOLUTION
The function interppoly(x,f,m) calculates the interpolating polynomial that passes through the points (x_i, f_i). The arithmetic is done mod m.
In order to use this function, we need to make a vector x that contains the x values, and another vector s that contains the share values. This can be done using the following two commands:
>> x=[9853 4421 6543 93293 12398];>> s=[853 4387 1234 78428 7563];
Now we calculate the coefficients for the interpolating polynomial.
>> y=interppoly(x,s,987541)y =678987 14728 1651 574413 456741
The first value corresponds to the constant term in the interpolating polynomial and is the secret value. Therefore, 678987 is the secret.
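Since only the constant term is needed, we can skip building the whole interpolating polynomial and evaluate the Lagrange formula directly at 0. A Python sketch (function name ours; the expected value 678987 is the secret found by interppoly above):

```python
def recover_secret(xs, shares, p):
    """Evaluate the Lagrange interpolating polynomial at 0, mod prime p."""
    secret = 0
    for i, (xi, si) in enumerate(zip(xs, shares)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (-xj) % p        # factor (0 - x_j)
                den = den * (xi - xj) % p
        secret = (secret + si * num * pow(den, -1, p)) % p
    return secret

x = [9853, 4421, 6543, 93293, 12398]
s = [853, 4387, 1234, 78428, 7563]
secret = recover_secret(x, s, 987541)
print(secret)    # 678987
```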
Here is a game you can play. It is essentially the simplified version of poker over the telephone from Section 18.2. There are five cards: ten, jack, queen, king, ace. We have chosen to abbreviate them by the following: ten, ace, que, jac, kin. They are shuffled and disguised by raising their numbers to a random exponent mod the prime 300649. You are supposed to guess which one is the ace.
First, the cards are entered in and converted to numerical values by the following steps:
>> cards=['ten';'ace';'que';'jac';'kin'];
>> cvals=text2int1(cards)
cvals =
200514 10305 172105 100103 110914
Next, we pick a random exponent k that will be used in the hiding operation. We use the semicolon after khide so that we cannot cheat and see what value of k is being used.
>> p=300649;
>> k=khide(p);
Now, shuffle the disguised cards (their numbers are raised to the kth power mod p and then randomly permuted):
>> shufvals=shuffle(cvals,k,p)
shufvals =
226536 226058 241033 281258 116809
These are the five cards. None looks like the ace; that’s because their numbers have been raised to powers mod the prime. Make a guess anyway. Let’s see if you’re correct.
>> reveal(shufvals,k,p)ans =jacquetenkinace
Let’s play again:
>> k=khide(p);
>> shufvals=shuffle(cvals,k,p)
shufvals =
117135 144487 108150 266322 264045
Make your guess (note that the numbers are different because a different random exponent was used). Were you lucky?
>> reveal(shufvals,k,p)ans =kinjactenqueace
Perhaps you need some help. Let’s play one more time:
>> k=khide(p);
>> shufvals=shuffle(cvals,k,p)
shufvals =
108150 144487 266322 264045 117135
We now ask for advice:
>> advise(shufvals,p);
Ace Index: 4
We are advised that the fourth card is the ace. Let’s see:
>> reveal(shufvals,k,p)ans =tenjacqueacekin
How does this work? Read the part on "How to Cheat" in Section 18.2. Note that if we raise the numbers for the cards to the power (p − 1)/2 mod p, we get
>> powermod(cvals,(p-1)/2,p)
ans =
1 300648 1 1 1
Therefore, only the ace is a quadratic nonresidue mod p.
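Euler's criterion, which is what the powermod line computes, is easy to check in Python: a^((p−1)/2) mod p is 1 for quadratic residues and p − 1 for nonresidues.

```python
p = 300649
cards = {"ten": 200514, "ace": 10305, "que": 172105,
         "jac": 100103, "kin": 110914}
for name, val in cards.items():
    # Euler's criterion: 1 for residues, p-1 (i.e., -1) for nonresidues
    print(name, pow(val, (p - 1) // 2, p))
```

Only "ace" prints p − 1 = 300648, which is why raising to odd powers can never disguise which card is the ace.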
We want to graph the elliptic curve y^2 = x(x − 1)(x + 1).
First, we create a string that contains the equation we wish to graph.
>> v=’y^2 - x*(x-1)*(x+1)’;
Next we use the ezplot command to plot the elliptic curve.
>> ezplot(v,[-1,3,-5,5])
The plot appears in Figure C.1. The use of [-1,3,-5,5] in the preceding command is to define the limits of the x-axis and y-axis in the plot.
Add the points (1,3) and (3,5) on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29).
>> addell([1,3],[3,5],24,13,29)ans =26 1
You can check that the point (26,1) is on the curve: 26^3 + 24·26 + 13 ≡ 1 ≡ 1^2 (mod 29). (Note: addell([x,y],[u,v],b,c,n) is only programmed to work for odd n.)
Add (1,3) to the point at infinity on the curve of the previous example.
>> addell([1,3],[inf,inf],24,13,29)ans =1 3
As expected, adding the point at infinity to a point P returns the point P.
Let P = (1, 3) be a point on the elliptic curve y^2 ≡ x^3 + 24x + 13 (mod 29). Find 7P.
>> multell([1,3],7,24,13,29)ans =15 6
Find kP for k = 1, 2, ..., 40 and P = (1, 3) on the curve of the previous example.
>> multsell([1,3],40,24,13,29)
ans =
1: 1 3
2: 11 10
3: 23 28
4: 0 10
5: 19 7
6: 18 19
7: 15 6
8: 20 24
9: 4 12
10: 4 17
11: 20 5
12: 15 23
13: 18 10
14: 19 22
15: 0 19
16: 23 1
17: 11 19
18: 1 26
19: inf inf
20: 1 3
21: 11 10
22: 23 28
23: 0 10
24: 19 7
25: 18 19
26: 15 6
27: 20 24
28: 4 12
29: 4 17
30: 20 5
31: 15 23
32: 18 10
33: 19 22
34: 0 19
35: 23 1
36: 11 19
37: 1 26
38: inf inf
39: 1 3
40: 11 10
Notice how the points repeat after every 19 multiples.
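The addell and multell computations can be sketched in Python for a prime modulus (ec_add and ec_mult are our own names; None plays the role of the point at infinity):

```python
def ec_add(P, Q, b, c, n):
    """Add points on y^2 = x^3 + b*x + c (mod n); None is infinity."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                       # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + b) * pow(2 * y1, -1, n) % n   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, n) % n          # chord slope
    x3 = (m * m - x1 - x2) % n
    return (x3, (m * (x1 - x3) - y1) % n)

def ec_mult(P, k, b, c, n):
    R = None
    for _ in range(k):                    # repeated addition; fine for small k
        R = ec_add(R, P, b, c, n)
    return R

print(ec_add((1, 3), (3, 5), 24, 13, 29))   # (26, 1)
print(ec_mult((1, 3), 7, 24, 13, 29))       # (15, 6)
print(ec_mult((1, 3), 19, 24, 13, 29))      # None: the point at infinity
```

The last line confirms that P = (1, 3) has order 19, which is why the multiples repeat with period 19.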
The previous four examples worked mod the prime 29. If we work mod a composite number, the situation at infinity becomes more complicated since we could be at infinity mod both factors or we could be at infinity mod one of the factors but not mod the other. Therefore, we stop the calculation if this last situation happens and we exhibit a factor. For example, let's try to compute 12P, where P = (1, 3) is on the elliptic curve y^2 ≡ x^3 − 5x + 13 (mod 209):
>> multell([1,3],12,-5,13,11*19)Elliptic Curve addition produced a factor of n, factor= 19Multell found a factor of n and exitedans =[]
Now let’s compute the successive multiples to see what happened along the way:
>> multsell([1,3],12,-5,13,11*19)
Elliptic Curve addition produced a factor of n, factor= 19
Multsell ended early since it found a factor
ans =
1: 1 3
2: 91 27
3: 118 133
4: 148 182
5: 20 35
When we computed 6P, we ended up at infinity mod 19. Let's see what is happening mod the two prime factors of 209, namely 19 and 11:
>> multsell([1,3],20,-5,13,19)
ans =
1: 1 3
2: 15 8
3: 4 0
4: 15 11
5: 1 16
6: Inf Inf
7: 1 3
8: 15 8
9: 4 0
10: 15 11
11: 1 16
12: Inf Inf
13: 1 3
14: 15 8
15: 4 0
16: 15 11
17: 1 16
18: Inf Inf
19: 1 3
20: 15 8
>> multsell([1,3],20,-5,13,11)
ans =
1: 1 3
2: 3 5
3: 8 1
4: 5 6
5: 9 2
6: 6 10
7: 2 0
8: 6 1
9: 9 9
10: 5 5
11: 8 10
12: 3 6
13: 1 8
14: Inf Inf
15: 1 3
16: 3 5
17: 8 1
18: 5 6
19: 9 2
20: 6 10
After six steps, we were at infinity mod 19, but it takes 14 steps to reach infinity mod 11. To find 6P, we needed to invert a number that was 0 mod 19 and nonzero mod 11. This couldn't be done, but it yielded the factor 19. This is the basis of the elliptic curve factorization method.
Factor 193279 using elliptic curves.
SOLUTION
First, we need to choose some random elliptic curves and a point on each curve. For example, let's take P = (2, 4) and the elliptic curve
y^2 ≡ x^3 − 10x + c (mod 193279).
For P to lie on the curve, we take c = 28. We'll also take the curve y^2 ≡ x^3 + 11x − 11 with P = (1, 1), and the curve y^2 ≡ x^3 + 17x − 14 with P = (1, 2).
Now we compute multiples of the point P. We do the analog of the p − 1 method, so we choose a bound B, say B = 12, and compute (12!)P.
>> multell([2,4],factorial(12),-10,28,193279)
Elliptic Curve addition produced a factor of n, factor= 347
Multell found a factor of n and exited
ans =
[]
>> multell([1,1],factorial(12),11,-11,193279)
ans =
13862 35249
>> multell([1,2],factorial(12),17,-14,193279)
Elliptic Curve addition produced a factor of n, factor= 557
Multell found a factor of n and exited
ans =
[]
Let’s analyze in more detail what happened in these examples.
On the first curve, (12!)P is infinity mod 347 but not infinity mod 557. The order of P on the curve mod 557 has a prime factor larger than B = 12, so (12!)P is not infinity mod 557. But the order of P on the curve mod 347 divides 12!, so (12!)P is infinity mod 347.
On the second curve, (12!)P is not infinity mod 347, nor mod 557. The orders of P mod 347 and mod 557 each have a prime factor larger than 12, so we don't expect to find the factorization with this curve.
The third curve is a surprise. The orders of P mod 347 and mod 557 each have a prime factor larger than 12, so we don't expect to find the factorization with this curve. However, by chance, an intermediate step in the calculation of (12!)P yielded the factorization. Here's what happened. At an intermediate step in the calculation, the program required adding two points that are congruent mod 557 but not mod 347. Therefore, the slope of the line through these two points is defined mod 347 but is undefined mod 557. When we tried to find the multiplicative inverse of the denominator mod 193279, the gcd algorithm yielded the factor 557. This phenomenon is fairly rare.
Here is how to produce the example of an elliptic curve ElGamal cryptosystem from Section 21.5. For more details, see the text. The elliptic curve is y^2 ≡ x^3 + 3x + 45 (mod 8831) and the point is G = (4, 11). Alice's message is the point Pm = (5, 1743).
Bob has chosen his secret random number b = 3 and has computed b·(4, 11):
>> multell([4,11],3,3,45,8831)ans =413 1808
Bob publishes this point. Alice chooses the random number k = 8 and computes k·(4, 11) and (5, 1743) + k·(413, 1808):
>> multell([4,11],8,3,45,8831)ans =5415 6321>> addell([5,1743],multell([413,1808],8,3,45,8831),3,45,8831)ans =6626 3576
Alice sends (5415, 6321) and (6626, 3576) to Bob, who multiplies the first of these points by b = 3:
>> multell([5415,6321],3,3,45,8831)
ans =
673 146
Bob then subtracts the result from the last point Alice sends him. Note that he subtracts by adding the point with the second coordinate negated:
>> addell([6626,3576],[673,-146],3,45,8831)ans =5 1743
Bob has therefore received Alice’s message.
Let's reproduce the numbers in the example of a Diffie-Hellman key exchange from Section 21.5: The elliptic curve is y^2 ≡ x^3 + x + 7206 (mod 7211) and the point is G = (3, 5). Alice chooses her secret NA = 12 and Bob chooses his secret NB = 23. Alice calculates
>> multell([3,5],12,1,7206,7211)ans =1794 6375
She sends (1794,6375) to Bob. Meanwhile, Bob calculates
>> multell([3,5],23,1,7206,7211)ans =3861 1242
and sends (3861, 1242) to Alice. Alice multiplies what she receives by NA = 12, and Bob multiplies what he receives by NB = 23:
>> multell([3861,1242],12,1,7206,7211)ans =1472 2098>> multell([1794,6375],23,1,7206,7211)ans =1472 2098
Therefore, Alice and Bob have produced the same key.
Sage is an open-source computer algebra package. It can be downloaded for free from www.sagemath.org/ or it can be accessed directly online at the website https:/
Suppose you want to encrypt the plaintext This is the plaintext with a shift of 3. We first encode it as an alphabetic string of capital letters with the spaces removed. Then we shift each letter by three positions:
S=ShiftCryptosystem(AlphabeticStrings())P=S.encoding("This is the plaintext")C=S.enciphering(3,P);C
When this is evaluated, we obtain the ciphertext
WKLVLVWKHSODLQWHAW
To decrypt, we can shift by −3 (that is, by 23) or do the following:
S.deciphering(3,C)
When this is evaluated, we obtain
THISISTHEPLAINTEXT
Suppose we don’t know the key and we want to decrypt by trying all possible shifts:
S.brute_force(C)
Evaluation yields
0: WKLVLVWKHSODLQWHAW,1: VJKUKUVJGRNCKPVGZV,2: UIJTJTUIFQMBJOUFYU,3: THISISTHEPLAINTEXT,4: SGHRHRSGDOKZHMSDWS,5: RFGQGQRFCNJYGLRCVR,6: etc.24: YMNXNXYMJUQFNSYJCY,25: XLMWMWXLITPEMRXIBX
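The shift cipher and its brute-force attack are also a couple of lines of plain Python (the function name is ours; decryption is just a shift by the negative key):

```python
def shift_encipher(text, key):
    """Shift each capital letter by key positions mod 26."""
    return "".join(chr((ord(ch) - 65 + key) % 26 + 65) for ch in text)

P = "THISISTHEPLAINTEXT"
C = shift_encipher(P, 3)
print(C)                          # WKLVLVWKHSODLQWHAW
for key in range(26):             # brute force: undo every possible shift
    print(key, shift_encipher(C, -key))
```

Only key 3 produces readable English.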
Let's encrypt the plaintext This is the plaintext using the affine function 3x + 1 mod 26:
A=AffineCryptosystem(AlphabeticStrings())P=A.encoding("This is the plaintext")C=A.enciphering(3,1,P);C
When this is evaluated, we obtain the ciphertext
GWZDZDGWNUIBZOGNSG
To decrypt, we can do the following:
A.deciphering(3,1,C)
When this is evaluated, we obtain
THISISTHEPLAINTEXT
We can also find the decryption key:
A.inverse_key(3,1)
This yields
(9, 17)
Of course, if we "encrypt" the ciphertext using 9y + 17, we obtain the plaintext:
A.enciphering(9,17,C)
Evaluate to obtain
THISISTHEPLAINTEXT
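A Python sketch of the affine cipher makes the inverse-key computation explicit: the decryption key (9, 17) is just (3^(-1) mod 26, −3^(-1)·1 mod 26).

```python
def affine_encipher(text, a, b):
    """Apply x -> a*x + b (mod 26) to each capital letter."""
    return "".join(chr((a * (ord(ch) - 65) + b) % 26 + 65) for ch in text)

P = "THISISTHEPLAINTEXT"
C = affine_encipher(P, 3, 1)
print(C)                                  # GWZDZDGWNUIBZOGNSG
a_inv = pow(3, -1, 26)                    # 9
b_inv = -a_inv * 1 % 26                   # 17
print(a_inv, b_inv)                       # the decryption key (9, 17)
print(affine_encipher(C, a_inv, b_inv))   # THISISTHEPLAINTEXT
```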
Let’s encrypt the plaintext This is the plaintext using the keyword ace (that is, shifts of 0, 2, 4). Since we need to express the keyword as an alphabetic string, it is efficient to add a symbol for these strings:
AS=AlphabeticStrings()V=VigenereCryptosystem(AS,3)K=AS.encoding("ace")P=V.encoding("This is the plaintext")C=V.enciphering(K,P);C
The “3” in the expression for V is the length of the key. When the above is evaluated, we obtain the ciphertext
TJMSKWTJIPNEIPXEZX
To decrypt, we can shift each letter by the negative of the corresponding key shift or do the following:
V.deciphering(K,C)
When this is evaluated, we obtain
THISISTHEPLAINTEXT
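The Vigenère operation itself is a short Python function (our own sketch; the sign argument selects encryption or decryption):

```python
def vigenere(text, key, sign=1):
    """Shift letter i of text by sign times the i-th key shift, mod 26."""
    shifts = [ord(k) - 65 for k in key]
    return "".join(chr((ord(ch) - 65 + sign * shifts[i % len(key)]) % 26 + 65)
                   for i, ch in enumerate(text))

P = "THISISTHEPLAINTEXT"
C = vigenere(P, "ACE")           # keyword ace = shifts 0, 2, 4
print(C)                         # TJMSKWTJIPNEIPXEZX
print(vigenere(C, "ACE", -1))    # THISISTHEPLAINTEXT
```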
Now let’s try the example from Section 2.3. The ciphertext can be cut and pasted from ciphertexts.m in the MATLAB files (or, with a little more difficulty, from the Mathematica or Maple files). A few control symbols need to be removed in order to make the ciphertext a single string.
C="vvhqwvvrhmusgjgthkihtssejchlsfcbgvwcrlryqtfs . . . czvile"
(We omitted part of the ciphertext in the above in order to save space.) Now let’s compute the matches for various displacements. This is done by forming a string that displaces the ciphertext by positions by adding blank spaces at the beginning and then counting matches.
for i in range(0,7):
    C2 = [" "]*i + list(C)
    count = 0
    for j in range(len(C)):
        if C2[j] == C[j]:
            count += 1
    print i, count
The result is
0 331
1 14
2 14
3 16
4 14
5 24
6 12
The 331 is for a displacement of 0, so all 331 characters match. The high number of matches for a displacement of 5 suggests that the key length is 5. We now want to determine the key.
First, let’s choose every fifth letter, starting with the first (counted as 0 for Sage). We extract these letters, put them in a list, then count the frequencies.
V1=list(C[0::5])
dict((x, V1.count(x)) for x in V1)
The result is
C: 7,D: 1,E: 1,F: 2,G: 9,I: 1,J: 8,K: 8,N: 3,P: 4,Q: 5,R: 2,T: 3,U: 6,V: 5,W: 1,Y: 1
Note that A, B, H, L, M, O, S, X, Z do not occur among the letters, hence are not listed. As discussed in Subsection 2.3.2, the shift for these letters is probably 2. Now, let's choose every fifth letter, starting with the second (counted as 1 for Sage). We compute the frequencies:
V2=list(C[1::5])
dict((x, V2.count(x)) for x in V2)
A: 3, B: 3, C: 4, F: 3, G: 10, H: 6, M: 2, O: 3, P: 1, Q: 2, R: 3, S: 12, T: 3, U: 2, V: 3, W: 3, Y: 1, Z: 2
As in Subsection 2.3.2, the shift is probably 14. Continuing in this way, we find that the most likely key is {2, 14, 3, 4, 18}, which is codes. Let's decrypt:
V=VigenereCryptosystem(AS,5)
K=AS.encoding("codes")
P=V.deciphering(K,C);P
THEMETHODUSEDFORTHEPREPARATIONANDREADINGOFCODEMES . . . ALSETC
To find the greatest common divisor, type the following first line and then evaluate:
gcd(119, 259)
7
To find the next prime greater than or equal to a number:
next_prime(1000)
1009
To factor an integer:
factor(2468)
2^2 * 617
Let's solve the simultaneous congruences x ≡ 1 (mod 5), x ≡ 3 (mod 7):
crt(1,3,5,7)
31
To solve the three simultaneous congruences x ≡ 1 (mod 5), x ≡ 3 (mod 7), x ≡ 0 (mod 11):
a= crt(1,3,5,7)
crt(a,0,35,11)
66
Compute 123^456 (mod 789):
mod(123,789)^456
699
Compute x so that 65x ≡ 1 (mod 987):
mod(65,987)^(-1)
410
Let's check the answer:
mod(65*410, 987)
1
Consider the recurrence relation x_{n+4} ≡ x_n + x_{n+1} + x_{n+3} (mod 2), with initial values x_1 = 1, x_2 = 0, x_3 = 0, x_4 = 0. We need to use 0s and 1s, but we need to tell Sage that they are numbers mod 2. One way is to define "o" (that's a lower-case "oh") and "l" (that's an "ell") to be 0 and 1 mod 2:
F=GF(2)o=F(0); l=F(1)
We also could use F(0) every time we want to enter a 0, but the present method saves some typing. Now we specify the coefficients and initial values of the recurrence relation, along with how many terms we want. In the following, we ask for 20 terms:
s=lfsr_sequence([l,l,o,l],[l,o,o,o],20);s
This evaluates to
[1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0]
Suppose we are given these terms of a sequence and we want to find what recurrence relation generates it:
berlekamp_massey(s)
This evaluates to
x^4 + x^3 + x + 1
When this polynomial is interpreted mod 2, its coefficients 1, 1, 0, 1 (from the constant term up through x^3) give the coefficients of the recurrence relation. In fact, it gives the smallest relation that generates the sequence.
Note: Even though the output for s has 0s and 1s, if we try entering the command berlekamp_massey([1,1,0,1,1,0]) we get x^3 - 1. If we instead enter berlekamp_massey([l,l,o,l,l,o]), we get x^2 + x + 1. Why? The first is looking at a sequence of integers generated by the relation x_{n+3} = x_n, while the second is looking at the sequence of integers mod 2 generated by x_{n+2} ≡ x_n + x_{n+1} (mod 2). Sage defaults to integers if nothing is specified. But it remembers that the 0s and 1s that it wrote in s are still integers mod 2.
Let's encrypt the plaintext This is the plaintext using a 3 × 3 matrix. First, we need to specify that we are working with such a matrix with entries in the integers mod 26:
R=IntegerModRing(26)M=MatrixSpace(R,3,3)
Now we can specify the matrix that is the encryption key:
K=M([[1,2,3],[4,5,6],[7,8,10]]);K
Evaluate to obtain
[ 1 2 3][ 4 5 6][ 7 8 10]
This is the encryption matrix. We can now encrypt:
H=HillCryptosystem(AlphabeticStrings(),3)P=H.encoding("This is the plaintext")C=H.enciphering(K,P);C
If the length of the plaintext is not a multiple of 3 (= the size of the matrix), then extra characters need to be appended to achieve this. When the above is evaluated, we obtain the ciphertext
ZHXUMWXBJHHHLZGVPC
Decrypt:
H.deciphering(K,C)
When this is evaluated, we obtain
THISISTHEPLAINTEXT
We could also find the inverse of the encryption matrix mod 26:
K1=K.inverse();K1
This evaluates to
[ 8 16 1][ 8 21 24][ 1 24 1]
When we evaluate
H.enciphering(K1,C)
we obtain
THISISTHEPLAINTEXT
Suppose someone unwisely chooses RSA primes and to be consecutive primes:
p=next_prime(987654321*10^50+12345); q=next_prime(p+1)
n=p*q
Let's factor the modulus n without using the factor command:
s=N(sqrt(n), digits=70)
p1=next_prime(s)
q1=n/p1
p1, q1
(98765432100000000000000000000000000000000000000000000012773,
98765432100000000000000000000000000000000000000000000012617)
Of course, the fact that p and q are consecutive primes is important for this calculation to work. Note that we needed to specify 70-digit accuracy so that round-off error would not give us the wrong starting point for looking for the next prime. The factors we obtained match the original p and q, up to order:
p, q(98765432100000000000000000000000000000000000000000000012617,98765432100000000000000000000000000000000000000000000012773)
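In Python the same attack can be done in exact integer arithmetic with math.isqrt, which sidesteps the 70-digit-precision issue entirely. Starting just above the integer square root of n, the first divisor we meet is the larger prime:

```python
from math import isqrt

p = 98765432100000000000000000000000000000000000000000000012617
q = 98765432100000000000000000000000000000000000000000000012773
n = p * q
s = isqrt(n)          # exact integer square root: no round-off error
c = s + 1
while n % c != 0:     # first divisor above sqrt(n) is the larger prime q
    c += 1
print(c, n // c)      # prints q, then p
```

The loop terminates quickly because q is only a few dozen steps above sqrt(n) when p and q are this close together.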
Let's solve the discrete log problem 2^x ≡ 71 (mod 131) by the Baby Step-Giant Step method of Subsection 10.2.2. We take N = 12 since N^2 ≥ 131, and we form two lists. The first is 2^i (mod 131) for 0 ≤ i ≤ 11:
for i in range(0,12): print i, mod(2,131)^i
0 1
1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 125
9 119
10 107
11 83
The second is 71 · 2^(−12i) (mod 131) for 0 ≤ i ≤ 11:
for i in range(0,12): print i, mod(71*mod(2,131)^(-12*i),131)
0 71
1 17
2 124
3 26
4 128
5 86
6 111
7 93
8 85
9 96
10 130
11 116
The number 128 is on both lists, so we see that 2^7 ≡ 71 · 2^(−12·4) (mod 131). Therefore, 71 ≡ 2^55 (mod 131), and the discrete log is 55.
Suppose a lazy phone company employee assigns telephone numbers by choosing random seven-digit numbers. In a town with 10,000 phones, what is the probability that two people receive the same number?
i = var('i')
1-product(1.-i/10^7,i,1,9999)
0.9932699132835016
Suppose we want to find the polynomial of degree at most 3 that passes through the points (1, 1), (2, 2), (3, 21), (5, 12) mod the prime 37. We first need to specify that we are working with polynomials in x mod 37. Then we compute the polynomial:
R=PolynomialRing(GF(37),"x")
f=R.lagrange_polynomial([(1,1),(2,2),(3,21),(5,12)]); f
This evaluates to
22*x^3 + 25*x^2 + 31*x + 34
If we want the constant term:
f(0)
This evaluates to
34
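For readers without Sage, Lagrange interpolation mod a prime is a short computation in plain Python (the helper name lagrange_eval is ours); evaluating at 0 recovers the constant term directly:

```python
# Lagrange interpolation over GF(37): evaluate the unique cubic through
# the given points at any x, without building f explicitly.
p = 37
pts = [(1, 1), (2, 2), (3, 21), (5, 12)]

def lagrange_eval(pts, x, p):
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

print(lagrange_eval(pts, 0, p))   # 34, the constant term of f
```

You can check that this agrees with the polynomial 22*x^3 + 25*x^2 + 31*x + 34 computed above at every point of GF(37), since a cubic through four points is unique.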
Here is a game you can play. It is essentially the simplified version of poker over the telephone from Section 18.2.
There are five cards: ten, jack, queen, king, ace. They are shuffled and disguised by raising their numbers to a random exponent mod the prime 24691313099. You are supposed to guess which one is the ace.
The cards are represented by ten = 200514, etc., because t is the 20th letter, e is the 5th letter, and n is the 14th letter.
Type the following into Sage. The value of k forces the randomization to have 248 as its starting point, so the random choice of e and the shuffle do not change when we repeat the process with this value of k. Varying k gives different results.
cards=[200514,10010311,1721050514,11091407,10305]
p=24691313099
k=248
set_random_seed(k)
e=randint(10,10^7)
def pow(list,ex):
    ret = []
    for i in list:
        ret.append(mod(i,p)^ex)
    return ret
s=pow(cards,2*e+1)
shuffle(s)
print(s)
Evaluation yields
[10426004161, 16230228497, 12470430058, 3576502017, 2676896936]
These are the five cards. None looks like the ace; that’s because their numbers have been raised to powers mod the prime. Make a guess anyway. Let’s see if you’re correct.
Add the following line to the end of the program:
print(pow(s,mod(2*e+1,p-1)^(-1)))
and evaluate again:
[10426004161, 16230228497, 12470430058, 3576502017, 2676896936]
[10010311, 200514, 11091407, 10305, 1721050514]
The first line is the shuffled and hidden cards. The second line removes the (2e+1)st power and reveals the cards. The fourth card is the ace. Were you lucky?
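The added line works because raising to the exponent 2e+1 and then to its inverse mod p-1 returns the original value, by Fermat’s theorem. A plain-Python sketch with a small illustrative prime and exponent of our own choosing (not the game’s values):

```python
# Masking a value by exponentiation and unmasking with the inverse
# exponent. The prime and exponent here are small illustrative choices.
p = 1019       # a small prime
k = 7          # masking exponent; gcd(k, p - 1) must be 1
card = 123

masked = pow(card, k, p)
k_inv = pow(k, -1, p - 1)            # inverse exponent mod p - 1
recovered = pow(masked, k_inv, p)    # Fermat: (card^k)^(k^-1) ≡ card
print(masked, recovered)
```

Because exponentiation mod p is commutative in the exponent, this same masking trick underlies the mental-poker protocol of Section 18.2.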
If you change the value of k, you can play again, but let’s figure out how to cheat. Remove the last line of the program that you just added and replace it with
pow(s,(p-1)/2)
When the program is evaluated, we obtain
[10426004161, 16230228497, 12470430058, 3576502017, 2676896936]
[1, 1, 1, 24691313098, 1]
Why does this tell us the ace is in the fourth position? Read the part on “How to Cheat” in Section 18.2. Raise the numbers for the cards to the power (p-1)/2 mod p (you can put this extra line at the end of the program and ignore the previous output):
print(pow(cards,(p-1)/2))
This produces
[1, 1, 1, 1, 24691313098]
We see that the prime p was chosen so that only the ace is a quadratic nonresidue mod p.
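The cheat is just Euler’s criterion: c^((p-1)/2) mod p is 1 when c is a square mod p and p-1 when it is not. In plain Python, applied to the unshuffled card values:

```python
# Euler's criterion on the card values: squares give 1, nonsquares give
# p - 1. Only the ace (10305) is a quadratic nonresidue mod p.
p = 24691313099
cards = [200514, 10010311, 1721050514, 11091407, 10305]
flags = [pow(c, (p - 1) // 2, p) for c in cards]
print(flags)    # [1, 1, 1, 1, 24691313098]
```

Raising to an odd power 2e+1 preserves residue status, so the one hidden card whose flag is p-1 must be the ace, no matter how the deck is shuffled.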
If you input another value of k and play the game again, you’ll have a similar situation, but with the cards in a different order.
Let’s set up the elliptic curve y^2 = x^3 + 2x + 3 mod 7:
E=EllipticCurve(IntegerModRing(7),[2,3])
The entry [2,3] gives the coefficients 2 and 3 of the polynomial x^3 + 2x + 3. More generally, we could use the vector [a,b,c,d,e] to specify the coefficients of the general form y^2 + axy + cy = x^3 + bx^2 + dx + e. We could also use GF(7) instead of IntegerModRing(7) to specify that we are working mod 7. We could replace the 7 in IntegerModRing(7) with a composite modulus, but this will sometimes result in error messages when adding points (this is the basis of the elliptic curve factorization method).
We can list the points on E:
E.points()
[(0:1:0), (2:1:1), (2:6:1), (3:1:1), (3:6:1), (6:0:1)]
These are given in projective form. The point (0:1:0) is the point at infinity. The point (2:6:1) can also be written as [2, 6]. We can add points:
E([2,1])+E([3,6])
(6 : 0 : 1)
E([0,1,0])+E([2,6])
(2 : 6 : 1)
In the second addition, we are adding the point at infinity to the point (2, 6) and obtaining (2, 6). This is an example of ∞ + P = P. We can multiply a point by an integer:
5*E([2,1])
(2 : 6 : 1)
We can list the multiples of a point in a range:
for i in range(10):
    print(i, i*E([2,6]))
(0, (0 : 1 : 0))
(1, (2 : 6 : 1))
(2, (3 : 1 : 1))
(3, (6 : 0 : 1))
(4, (3 : 6 : 1))
(5, (2 : 1 : 1))
(6, (0 : 1 : 0))
(7, (2 : 6 : 1))
(8, (3 : 1 : 1))
(9, (6 : 0 : 1))
The indentation of the print line is necessary since it indicates that this is iterated by the for command. To count the number of points on E:
E.cardinality()
6
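The group law on this small curve can also be checked without Sage. Here is a plain-Python sketch of affine point addition on y^2 = x^3 + 2x + 3 mod 7, with None standing for the point at infinity; it reproduces the point count and sums computed above:

```python
# Affine point arithmetic on y^2 = x^3 + 2x + 3 mod 7.
# None represents the point at infinity.
p, a, b = 7, 2, 3

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = infinity
    if P == Q:                          # doubling: tangent slope
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                               # addition: chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# Enumerate the affine points, plus the point at infinity.
points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y * y - x**3 - a * x - b) % p == 0]
print(len(points))               # 6, matching E.cardinality()
print(ec_add((2, 1), (3, 6)))    # (6, 0), matching the Sage sum
```

For a curve this small, brute-force enumeration is fine; Sage’s cardinality method is what makes the 50-digit example below feasible.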
Sage has a very fast point counting algorithm (due to Atkin, Elkies, and Schoof; it is much more sophisticated than listing the points, which would be infeasible). For example,
p=next_prime(10^50)
E1=EllipticCurve(IntegerModRing(p),[2,3])
n=E1.cardinality()
p, n, n-(p+1)
This evaluates to
(100000000000000000000000000000000000000000000000151,
 99999999999999999999999999112314733133761086232032,
 -887685266866238913768120)
As you can see, the number of points on this curve (the second output line) is close to p + 1. In fact, as predicted by Hasse’s theorem, the difference n - (p + 1) (on the last output line) is less in absolute value than 2√p.
5. The ciphertext is QZNHOBXZD. The decryption function is .
The possible values for are 1,7,11,13,17,19,23,29.
There are many such possible answers, for example and will work. These correspond to the letters ’b’ and ’e’.
The key is AB. The original plaintext is BBBBBBABBB.
One possibility: .
Decryption is performed by raising the ciphertext to the 13th power mod 31.
, or .
No solutions.
If with , then either of . Otherwise, . If , let be a prime factor of .
.
No prime less than or equal to divides 257 because of the gcd calculation.
The gcd is 257.
and .
The gcd is 1.
The gcd is 1.
The gcd is 1.
Use the Corollary in Section 3.2.
Imitate the proof of the Corollary in Section 3.2.
19. The smallest number is 58 and the next smallest number is 118.
.
.
.
The last digit is 3.
.
.
No solutions.
There are solutions.
No solutions.
0.
.
3. The conditional probability is 0. Affine ciphers do not have perfect secrecy.
1/2.
is a possibility.
Possible.
Possible.
Impossible.
.
.
Alice’s method is more secure.
Compatibility with single encryption.
Switch left and right halves and use the same procedure as encryption. Then switch the left and right of the final output.
After two rounds, the ciphertext alone lets you determine and therefore , but not or individually. If you also know the plaintext, you know and therefore can deduce .
Three rounds is very insecure.
3. The ciphertext from the second message can be decrypted to yield the password.
The keys for each round are all 1s, so using them in reverse order doesn’t change anything.
All 0s.
7. Show that when and are used, the input to the S-boxes is the same as when and are used.
.
Imitate the proof that RSA decryption works.
25. Combine the first three congruences. Ignore the fourth congruence.
1000000 messages.
Finding square roots is computationally equivalent to factoring.
for all
9. (a) and (b) Let be either of the hash functions. Given of length , we have .
1. Use the Birthday attack. Eve will probably factor some moduli.
0101 and 0110.
Enigma does not encrypt a letter to itself, so DOG is impossible.
If the first of two long plaintexts is encrypted with Enigma, it is very likely that at least one letter of the second plaintext will match a letter of the ciphertext. More precisely, each individual letter of the second plaintext that doesn’t match the first plaintext has probability around 1/26 of matching, so for long plaintexts the probability is high that there is a match between the second plaintext and the ciphertext. Therefore, Enigma does not have ciphertext indistinguishability.
We have . Therefore
Since , we have . Therefore,
Multiply by and raise to these exponents to obtain
This may be rewritten as
Since and , we have
The only place and are used in the verification procedure is in checking that .
The Spender spends the coin correctly once, using . The Spender then chooses any two random numbers with and uses the coin with the Vendor, with in place of . All the verification equations work.
7. Fred only needs to keep the hash of the file on his own computer.
The secret is .
Nelson computes a square root of and , then combines them to obtain a square root of .
Use the factorization method.
No.
Step 4: Victor randomly chooses or 2 and asks Peggy for .
Step 5: Victor checks that .
They repeat steps 1 through 5 at least 7 times (since ).
One way: Step 4: Victor chooses at random and asks for and . Then five repetitions are enough. Another way: Victor asks for only one of the . Then twelve repetitions suffice.
Choose . Then solve for .
.
.
.
.
She factors 35.
.
Eve knows . She computes
Eve now computes and XORs it with to get .
The original message is 0,1,0,0.
The original message is 0,1,0,1.
and .
.
.
.
19. The error is in the 3rd position. The corrected vector is (1,0,0,1,0,1,1).
The period is 4.
.
.
For the history of cryptography, see [Kahn] and [Bauer].
For additional treatment of topics in the present book, and many other topics, see [Stinson], [Stinson1], [Schneier], [Mao], and [Menezes et al.]. These books also have extensive bibliographies.
An approach emphasizing algebraic methods is given in [Koblitz].
For the theoretical foundations of cryptology, see [Goldreich1] and [Goldreich2]. See [Katz-Lindell] for an approach based on security proofs.
Books that are oriented toward protocols and practical network security include [Stallings], [Kaufman et al.], and [Aumasson].
For guidelines on properly applying cryptographic algorithms, the reader is directed to [Ferguson-Schneier]. For a general discussion of securing computing platforms, see [Pfleeger-Pfleeger].
The Internet, of course, contains a wealth of information about cryptographic issues. The Cryptology ePrint Archive server at http://eprint.iacr.org/ contains very recent research. Also, the conference proceedings CRYPTO, EUROCRYPT, and ASIACRYPT (published in Springer-Verlag’s Lecture Notes in Computer Science series) contain many interesting reports on recent developments.
[Adrian et al.] Adrian et al., "Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice," https:/.
[Agrawal et al.] M. Agrawal, N. Kayal, and N. Saxena, “PRIMES is in P,” Annals of Math. 160 (2004), 781–793.
[Alford et al.] W. R. Alford, A. Granville, and C. Pomerance, “On the difficulty of finding reliable witnesses,” Algorithmic Number Theory, Lecture Notes in Computer Science 877, Springer-Verlag, 1994, pp. 1–16.
[Alford et al. 2] W. R. Alford, A. Granville, and C. Pomerance, “There are infinitely many Carmichael numbers,” Annals of Math. 139 (1994), 703–722.
[Atkins et al.] D. Atkins, M. Graff, A. Lenstra, P. Leyland, “The magic words are squeamish ossifrage,” Advances in Cryptology – ASIACRYPT ’94, Lecture Notes in Computer Science 917, Springer-Verlag, 1995, pp. 263–277.
[Aumasson] J-P. Aumasson, Serious Cryptography: A Practical Introduction to Modern Encryption, No Starch Press, 2017.
[Bard] G. Bard, Sage for Undergraduates, Amer. Math. Soc., 2015.
[Bauer] C. Bauer, Secret History: The Story of Cryptology, CRC Press, 2013.
[Beker-Piper] H. Beker and F. Piper, Cipher Systems: The Protection of Communications, Wiley-Interscience, 1982.
[Bellare et al.] M. Bellare, R. Canetti, and H. Krawczyk, “Keying Hash Functions for Message Authentication,” Advances in Cryptology (Crypto 96 Proceedings), Lecture Notes in Computer Science Vol. 1109, N. Koblitz ed., Springer-Verlag, 1996.
[Bellare-Rogaway] M. Bellare and P. Rogaway, “Random oracles are practical: a paradigm for designing efficient protocols,” First ACM Conference on Computer and Communications Security, ACM Press, New York, 1993, pp. 62–73.
[Bellare-Rogaway2] M. Bellare and P. Rogaway, “Optimal asymmetric encryption,” Advances in Cryptology – EUROCRYPT ’94, Lecture Notes in Computer Science 950, Springer-Verlag, 1995, pp. 92–111.
[Berlekamp] E. Berlekamp, Algebraic Coding Theory, McGraw-Hill, 1968.
[Bernstein et al.] D. J. Bernstein, J. Buchmann, and E. Dahmen (eds.), Post-Quantum Cryptography, Springer-Verlag, 2009.
[Bitcoin] bitcoin, https:/
[Blake et al.] I. Blake, G. Seroussi, N. Smart, Elliptic Curves in Cryptography, Cambridge University Press, 1999.
[Blom] R. Blom, “An optimal class of symmetric key generation schemes,” Advances in Cryptology – EUROCRYPT’84, Lecture Notes in Computer Science 209, Springer-Verlag, 1985, pp. 335–338.
[Blum-Blum-Shub] L. Blum, M. Blum, and M. Shub, “A simple unpredictable pseudo-random number generator,” SIAM Journal on Computing 15(2) (1986), 364–383.
[Boneh] D. Boneh, “Twenty years of attacks on the RSA cryptosystem,” Amer. Math. Soc. Notices 46 (1999), 203–213.
[Boneh et al.] D. Boneh, G. Durfee, and Y. Frankel, “An attack on RSA given a fraction of the private key bits,” Advances in Cryptology – ASIACRYPT ’98, Lecture Notes in Computer Science 1514, Springer-Verlag, 1998, pp. 25–34.
[Boneh-Franklin] D. Boneh and M. Franklin, “Identity based encryption from the Weil pairing,” Advances in Cryptology – CRYPTO ’01, Lecture Notes in Computer Science 2139, Springer-Verlag, 2001, pp. 213–229.
[Boneh-Joux-Nguyen] D. Boneh, A. Joux, P. Nguyen, “Why textbook ElGamal and RSA encryption are insecure,” Advances in Cryptology – ASIACRYPT ’00, Lecture Notes in Computer Science 1976, Springer-Verlag, 2000, pp. 30–43.
[Brands] S. Brands, “Untraceable off-line cash in wallets with observers,” Advances in Cryptology – CRYPTO’93, Lecture Notes in Computer Science 773, Springer-Verlag, 1994, pp. 302–318.
[Campbell-Wiener] K. Campbell and M. Wiener, “DES is not a group,” Advances in Cryptology – CRYPTO ’92, Lecture Notes in Computer Science 740, Springer-Verlag, 1993, pp. 512–520.
[Canetti et al.] R. Canetti, O. Goldreich, and S. Halevi, “The random oracle methodology, revisited,” Proceedings of the thirtieth annual ACM symposium on theory of computing, ACM Press, 1998, pp. 209–218.
[Chabaud] F. Chabaud, “On the security of some cryptosystems based on error-correcting codes,” Advances in Cryptology – EUROCRYPT’94, Lecture Notes in Computer Science 950, Springer-Verlag, 1995, pp. 131–139.
[Chaum et al.] D. Chaum, E. van Heijst, and B. Pfitzmann, “Cryptographically strong undeniable signatures, unconditionally secure for the signer,” Advances in Cryptology – CRYPTO ’91, Lecture Notes in Computer Science 576, Springer-Verlag, 1992, pp. 470–484.
[Cohen] H. Cohen, A Course in Computational Number Theory, Springer-Verlag, 1993.
[Coppersmith1] D. Coppersmith, “The Data Encryption Standard (DES) and its strength against attacks,” IBM Journal of Research and Development, vol. 38, no. 3, May 1994, pp. 243–250.
[Coppersmith2] D. Coppersmith, “Small solutions to polynomial equations, and low exponent RSA vulnerabilities,” J. Cryptology 10 (1997), 233–260.
[Cover-Thomas] T. Cover and J. Thomas, Elements of Information Theory, Wiley Series in Telecommunications, 1991.
[Crandall-Pomerance] R. Crandall and C. Pomerance, Prime Numbers, a Computational Perspective, Springer-Telos, 2000.
[Crosby et al.] S. A. Crosby, D. S. Wallach, and R. H. Riedi, “Opportunities and limits of remote timing attacks,” ACM Trans. Inf. Syst. Secur. 12, 3, Article 17 (January 2009), 29 pages.
[Damgård et al.] I. Damgård, P. Landrock, and C. Pomerance, “Average case error estimates for the strong probable prime test,” Mathematics of Computation 61 (1993), 177–194.
[Dawson-Nielsen] E. Dawson and L. Nielsen, “Automated Cryptanalysis of XOR Plaintext Strings,” Cryptologia 20 (1996), 165–181.
[Diffie-Hellman] W. Diffie and M. Hellman, “New directions in cryptography,” IEEE Trans. in Information Theory, 22 (1976), 644–654.
[Diffie-Hellman2] W. Diffie and M. Hellman, “Exhaustive cryptanalysis of the NBS data encryption standard,” Computer 10(6) (June 1977), 74–84
[Ekert-Josza] A. Ekert and R. Jozsa, “Quantum computation and Shor’s factoring algorithm,” Reviews of Modern Physics, 68 (1996), 733–753.
[FIPS 186-2] FIPS 186-2, Digital signature standard (DSS), Federal Information Processing Standards Publication 186, U.S. Dept. of Commerce/National Institute of Standards and Technology, 2000.
[FIPS 202] FIPS PUB 202, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, Federal Information Processing Standards Publication 202, U.S. Dept. of Commerce/National Institute of Standards and Technology, 2015, available at http:/.
[Ferguson-Schneier] N. Ferguson and B. Schneier, Practical Cryptography, Wiley, 2003.
[Fortune-Merritt] S. Fortune and M. Merritt, “Poker Protocols,” Advances in Cryptology – CRYPTO’84, Lecture Notes in Computer Science 196, Springer-Verlag, 1985, pp. 454–464.
[Gaines] H. Gaines, Cryptanalysis, Dover Publications, 1956.
[Gallager] R. G. Gallager, Information Theory and Reliable Communication, Wiley, 1969.
[Genkin et al.] D. Genkin, A. Shamir, and E. Tromer, “RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis,” December 18, 2013, available at www.cs.tau.ac.il/∼tromer/papers/acoustic-20131218.pdf
[Gilmore] Cracking DES: Secrets of Encryption Research, Wiretap Politics & Chip Design, Electronic Frontier Foundation, J. Gilmore (editor), O’Reilly and Associates, 1998.
[Girault et al.] M. Girault, R. Cohen, and M. Campana, “A generalized birthday attack,” Advances in Cryptology – EUROCRYPT’88, Lecture Notes in Computer Science 330, Springer-Verlag, 1988, pp. 129–156.
[Goldreich1] O. Goldreich, Foundations of Cryptography: Volume 1, Basic Tools, Cambridge University Press, 2001.
[Goldreich2] O. Goldreich, Foundations of Cryptography: Volume 2, Basic Applications, Cambridge University Press, 2004.
[Golomb] S. Golomb, Shift Register Sequences, 2nd ed., Aegean Park Press, 1982.
[Hankerson et al.] D. Hankerson, A. Menezes, and S. Vanstone, Guide to Elliptic Curve Cryptography, Springer-Verlag, 2004.
[Hardy-Wright] G. Hardy and E. Wright, An Introduction to the Theory of Numbers. Fifth edition, Oxford University Press, 1979.
[Heninger et al.] N. Heninger, Z. Durumeric, E. Wustrow, J. A. Halderman, “Mining your Ps and Qs: Detection of widespread weak keys in network devices,” Proc. 21st USENIX Security Symposium, Aug. 2012; available at https:/.
[HIP] R. Moskowitz and P. Nikander, “Host Identity Protocol (HIP) Architecture,” May 2006; available at https:/
[Joux] A. Joux, “Multicollisions in iterated hash functions. Application to cascaded constructions,” Advances in Cryptology – CRYPTO 2004, Lecture Notes in Computer Science 3152, Springer, 2004, pp. 306–316.
[Kahn] D. Kahn, The Codebreakers, 2nd ed., Scribner, 1996.
[Kaufman et al.] C. Kaufman, R. Perlman, M. Speciner, Private Communication in a Public World. Second edition, Prentice Hall PTR, 2002.
[Kilian-Rogaway] J. Kilian and P. Rogaway, “How to protect DES against exhaustive key search (an analysis of DESX),” J. Cryptology 14 (2001), 17–35.
[Koblitz] N. Koblitz, Algebraic Aspects of Cryptography, Springer-Verlag, 1998.
[Kocher] P. Kocher, “Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems,” Advances in Cryptology – CRYPTO ’96, Lecture Notes in Computer Science 1109, Springer, 1996, pp. 104–113.
[Kocher et al.] P. Kocher, J. Jaffe, and B. Jun, “Differential power analysis,” Advances in Cryptology – CRYPTO ’99, Lecture Notes in Computer Science 1666, Springer, 1999, pp. 388–397.
[Konikoff-Toplosky] J. Konikoff and S. Toplosky, “Analysis of Simplified DES Algorithms,” Cryptologia 34 (2010), 211–224.
[Kozaczuk] W. Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War Two; edited and translated by Christopher Kasparek, Arms and Armour Press, London, 1984.
[KraftW] J. Kraft and L. Washington, An Introduction to Number Theory with Cryptography, CRC Press, 2018.
[Lenstra et al.] A. Lenstra, X. Wang, B. de Weger, “Colliding X.509 certificates,” preprint, 2005.
[Lenstra2012 et al.] A. K. Lenstra, J. P. Hughes, M. Augier, J. W. Bos, T. Kleinjung, and C. Wachter, “Ron was wrong, Whit is right,” https:/.
[Lin-Costello] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice Hall, 1983.
[MacWilliams-Sloane] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, 1977.
[Mantin-Shamir] I. Mantin and A. Shamir, “A practical attack on broadcast RC4,” In: FSE 2001, 2001.
[Mao] W. Mao, Modern Cryptography: Theory and Practice, Prentice Hall PTR, 2004.
[Matsui] M. Matsui,“Linear cryptanalysis method for DES cipher,” Advances in Cryptology – EUROCRYPT’93, Lecture Notes in Computer Science 765, Springer-Verlag, 1994, pp. 386–397.
[Menezes et al.] A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography, CRC Press, 1997.
[Merkle-Hellman] R. Merkle and M. Hellman, “On the security of multiple encryption,” Comm. of the ACM 24 (1981), 465–467.
[Mikle] O. Mikle, “Practical Attacks on Digital Signatures Using MD5 Message Digest,” Cryptology ePrint Archive, Report 2004/356, http:/, 2nd December 2004.
[Nakamoto] S. Nakamoto, “Bitcoin: A Peer-to-peer Electronic Cash System,” available at https:/
[Narayanan et al.] A. Narayanan, J. Bonneau, E. Felten, A. Miller, S. Goldfeder, Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction (with a preface by Jeremy Clark), Princeton University Press 2016.
[Nelson-Gailly] M. Nelson and J.-L. Gailly, The Data Compression Book, M&T Books, 1996.
[Nguyen-Stern] P. Nguyen and J. Stern, “The two faces of lattices in cryptology,” Cryptography and Lattices, International Conference, CaLC 2001, Lecture Notes in Computer Science 2146, Springer-Verlag, 2001, pp. 146–180.
[Niven et al.] I. Niven, H. Zuckerman, and H. Montgomery, An Introduction to the Theory of Numbers, Fifth ed., John Wiley & Sons, Inc., New York, 1991.
[Okamoto-Ohta] T. Okamoto and K. Ohta, “Universal electronic cash,” Advances in Cryptology – CRYPTO’91, Lecture Notes in Computer Science 576, Springer-Verlag, 1992, pp. 324–337.
[Pfleeger-Pfleeger] C. Pfleeger, S. Pfleeger, Security in Computing. Third edition, Prentice Hall PTR, 2002.
[Pomerance] C. Pomerance, “A tale of two sieves,” Notices Amer. Math. Soc. 43 (1996), no. 12, 1473–1485.
[Quisquater et al.] J.-J. Quisquater and L. Guillou, “How to explain zero-knowledge protocols to your children,” Advances in Cryptology – CRYPTO ’89, Lecture Notes in Computer Science 435, Springer-Verlag, 1990, pp. 628–631.
[Rieffel-Polak] E. Rieffel and W. Polak, “An Introduction to Quantum Computing for Non-Physicists,” available at xxx.lanl.gov/abs/quant-ph/9809016.
[Rosen] K. Rosen, Elementary Number Theory and its Applications. Fourth edition, Addison-Wesley, Reading, MA, 2000.
[Schneier] B. Schneier, Applied Cryptography, 2nd ed., John Wiley, 1996.
[Shannon1] C. Shannon, “Communication theory of secrecy systems,” Bell Systems Technical Journal 28 (1949), 656–715.
[Shannon2] C. Shannon, “A mathematical theory of communication,” Bell Systems Technical Journal, 27 (1948), 379–423, 623–656.
[Shoup] V. Shoup, “OAEP Reconsidered,” CRYPTO 2001 (J. Killian (ed.)), Springer LNCS 2139, Springer-Verlag Berlin Heidelberg, 2001, pp. 239–259.
[Stallings] W. Stallings, Cryptography and Network Security: Principles and Practice, 3rd ed., Prentice Hall, 2002.
[Stevens et al.] M. Stevens, E. Bursztein, P. Karpman, A. Albertini, Y. Markov, “The first collision for full SHA-1,” https://shattered.io/static/shattered.pdf.
[Stinson] D. Stinson, Cryptography: Theory and Practice. Second edition, Chapman & Hall/CRC Press, 2002.
[Stinson1] D. Stinson, Cryptography: Theory and Practice, CRC Press, 1995.
[Thompson] T. Thompson, From Error-Correcting Codes through Sphere Packings to Simple Groups, Carus Mathematical Monographs, number 21, Mathematical Assoc. of America, 1983.
[van der Lubbe] J. van der Lubbe, Basic Methods of Cryptography, Cambridge University Press, 1998.
[van Oorschot-Wiener] P. van Oorschot and M. Wiener, “A known-plaintext attack on two-key triple encryption,” Advances in Cryptology – EUROCRYPT ’90, Lecture Notes in Computer Science 473, Springer-Verlag, 1991, pp. 318–325.
[Wang et al.] X. Wang, D. Feng, X. Lai, H. Yu, “Collisions for hash functions MD-4, MD-5, HAVAL-128, RIPEMD,” preprint, 2004.
[Wang et al. 2] X. Wang, Y. Yin, H. Yu, “Finding collisions in the full SHA1,” to appear in CRYPTO 2005.
[Washington] L. Washington, Elliptic Curves: Number Theory and Cryptography, Chapman & Hall/CRC Press, 2003.
[Welsh] D. Welsh, Codes and Cryptography, Oxford, 1988.
[Wicker] S. Wicker, Error Control Systems for Digital Communication and Storage, Prentice Hall, 1995.
[Wiener] M. Wiener, “Cryptanalysis of short RSA secret exponents,” IEEE Trans. Inform. Theory, 36 (1990), 553–558.
[Williams] H. Williams, Edouard Lucas and Primality Testing, Wiley-Interscience, 1998.
[Wu1] T. Wu, “The secure remote password protocol,” In: Proc. of the Internet Society Network and Distributed Security Symposium, 97–111, March 1998.
[Wu2] T. Wu, “SRP-6: Improvements and refinements to the Secure Remote Password protocol,” 2002; available through http:/
, 445
, 451
, 71
, 57
, 442
, 463
, 459
3DES, 155
absorption, 238
acoustic cryptanalysis, 183
AddRoundKey, 161
ADFGX cipher, 27
Adleman, 171
Aesop, 37
Agrawal, 188
Alice, 2
ASCII, 88
asymptotic bounds, 449
Athena, 299
Atkins, 193
attacks, 3
authenticated key agreement, 292
authenticated key distribution, 295
basis, 421
Batey, 283
Bayes’s theorem, 367
BCH bound, 472
BCH codes, 472
Berlekamp, 483
Berson, 357
Bertoni, 237
Bidzos, 283
bilinear, 409
bilinear Diffie-Hellman, 412
bilinear pairing, 409
binary, 88
binary code, 442
bit, 88
bit commitment, 218
blind signature, 271
block code, 443
Blom key pre-distribution scheme, 294
BLS signatures, 414
Blum-Blum-Shub, 106
Bob, 2
bombes, 33
bounded storage, 90
bounds on codes, 446
breaking DES, 152
burst errors, 480
byte, 88
Caesar cipher, 11
Carmichael number, 80
CBC-MAC, 256
certification hierarchy, 304
certification path, 309
chain rule, 370
challenge-response, 357
characteristic 2, 396
cheating, 354
check symbols, 453
chosen ciphertext attack, 3
chosen plaintext attack, 3
ciphers, 5
ciphertext, 2
ciphertext only attack, 3
Cliff, 299
code, 442
codes, 5
coding gain, 442
coin, 321
composite, 41
conditional entropy, 370
confidentiality, 8
confusion, 119
congruence, 47
convolutional codes, 483
correct errors, 444
coset, 455
counter mode (CTR), 128
CRC-32, 287
cryptanalysis, 1
cryptocurrencies, 329
cryptography, 1
cryptology, 1
cyclic codes, 466
Damgård, 231
Daum, 232
decode, 442
DES Challenge, 153
DES Cracker, 153
DESX, 129
detect errors, 444
deterministic, 249
Di Crescenzo, 417
dictionary attack, 155
digital cash, 320
Ding, 90
discrete logarithm, 74, 84, 211, 228, 249, 272, 324, 352, 361, 363, 391, 399, 410
Disparition, La, 15
divides, 40
dual signature, 315
electronic cash, 9
Electronic Frontier Foundation, 153
electronic voting, 207
elliptic curve cryptosystems, 399
elliptic integral, 386
Ellis, 171
encode, 442
entropy of English, 376
entropy rate, 382
equivalent codes, 445
error correction, 26
error propagation, 119
Eve, 2
everlasting security, 90
existential forgery, 278
expansion permutation, 145
Mantin, 114
Maple, 527
Mariner, 441
MARS, 160
Mathematica, 503
MATLAB, 555
matrices, 61
Matsui, 144
Mauborgne, 88
Maurer, 90
Merkle-Damgård, 231
message digest, 226
message recovery scheme, 273
Mikle, 227
Miller, 384
minimum distance, 444
mining, 327
MIT, 299
mod, 47
modes of operation, 122
Morse code, 372
MOV attack, 410
multiple encryption, 129
Paillier cryptosystem, 206
Painvin, 28
pairing, 409
parity check, 438
Peeters, 237
Peggy, 357
Pell’s equation, 83
Persiano, 417
Pfitzmann, 228
plaintext-aware encryption, 181
Playfair cipher, 26
point at infinity, 385
poker, 351
polarization, 489
Pollard, 188
post-quantum cryptography, 435
PostScript, 232
preimage resistant, 226
Pretty Good Privacy (PGP), 309
PRGA, 114
primality testing, 183
prime, 41
primitive root of unity, 472
probabilistic, 249
probability, 365
provable security, 94
pseudoprime, 185
pseudorandom bits, 105
Public Key Infrastructure (PKI), 303
Różycki, 29
random oracle model, 251
random variable, 366
RC6, 160
reduced basis, 422
redundancy, 379
Reed-Solomon codes, 479
registration authority (RA), 304
relatively prime, 42
Rijmen, 160
Rijndael, 160
root of unity, 472
rotation, 230
rotor machines, 29
round key, 165
RoundKey addition, 164
RSA challenge, 192
run length coding, 381
Safavi-Naini, 415
Sage, 591
Saxena, 188
Shacham, 414
Scherbius, 29
Schnorr identification scheme, 363
secret splitting, 340
Secure Electronic Transaction (SET), 314
Secure Remote Password (SRP) protocol, 258
Secure Sockets Layer (SSL), 312
self-dual code, 457
sequence numbers, 296
Serge, 299
Serpent, 160
SHA-3, 237
SHAKE, 238
Shamir threshold scheme, 341
shortest vector problem, 422
side-channel attacks, 183
signature with appendix, 273
Singleton bound, 446
singular curves, 395
smooth, 394
Solovay-Strassen, 187
sphere packing bound, 447
sponge function, 237
squeamish ossifrage, 194
squeezing, 238
state, 237
station-to-station (STS) protocol, 292
stream cipher, 104
strongly collision resistant, 226
Susilo, 415
Sybil attack, 338
syndrome decoding, 456
systematic code, 453
ternary code, 442
threshold scheme, 341
ticket-granting service, 300
timestamps, 296
timing attacks, 181
Transmission Control Protocol (TCP), 483
Transport Layer Security (TLS), 312
treaty verification, 194
Trent, 300
triangle inequality, 444
tripartite Diffie-Hellman, 411
Turing, 33
two lists, 180
two-dimensional parity code, 438
Twofish, 160
The illustration shows Alice connected to Encrypt by an arrow labeled plaintext, Encrypt connected to Decrypt by an arrow labeled ciphertext, and Decrypt connected to Bob. An Encryption Key box has an arrow pointing down into Encrypt, and a Decryption Key box has an arrow pointing down into Decrypt. An arrow leads from the ciphertext arrow between Encrypt and Decrypt down to Eve.
The illustration shows a box with three rows. In the rows, the letters are shown with their frequencies.
In the first row, a: .082, b: .015, c: .028, d: .043, e: .127, f: .022, g: .020, h: .061, i: .070, j:.002. In the second row, k: .008, l: .040, m: .024, n:.067, o: .075, p: .019, q: .001, r: .060, s: .063, t: .091.
In the third row, u: .028, v: .010, w: .023, x: .001, y: .020, z: .001.
The illustration shows a box. In the box, the letters are shown with their frequencies, e: .127, t: .091, a: .082, o: .075, i: .070, n: .067, s: .063, h: .061, r: .060.
The table shows counting diagrams with 9 rows labeled as W, B, R, S, I, V, A, P, N and 9 columns labeled as W, B, R, S, I, V, A, P, N. The table shows various corresponding values for each row and each column.
The table shows 5 versus 5 matrix table with row labeled as A, D, F, G, X and columns labeled as A, D, F, G, X. The table shows various corresponding values for each column and each row.
The table shows a matrix table with 5 columns labeled as R, H, E, I, N. The table shows various corresponding various values for each column.
The table shows a matrix table with 5 columns labeled as E, H, I, N, R. The table shows various corresponding values for each column.
The illustration shows four rectangular boxes labeled as R, L, M, N respectively placed vertically parallel to each other. The illustration shows a square box labeled as S beyond the four boxes. S is connected to N, N is connected to M, M is connected to L, L is connected to R with an arrow and vice versa. Beyond S, a rectangular box labeled as K, keyboard is connected to S with an arrow at the very lower position and a bar with four bulbs labeled as glow lamps is connected to S with an arrow at the very upper position.
A table shows 6 rows and 6 columns for addition mod 6. The table contains a value for each row and column.
A table shows 6 rows and 6 columns for multiplication mod 6. The table contains a value for each row and column.
The table shows various symbols with their decimal and binary values in five rows. The first row contains exclamation point : 33 : 0100001, quotation mark : 34 : 0100010, hash : 35 : 0100011, dollar : 36 : 0100100, percent : 37 : 0100101, ampersand : 38 : 0100110, apostrophe : 39 : 0100111. The second row contains open parenthesis : 40 : 0101000, close parenthesis : 41 : 0101001, asterisk : 42 : 0101010, plus : 43 : 0101011, comma : 44 : 0101100, minus : 45 : 0101101, period : 46 : 0101110, slash : 47 : 0101111. The third row contains 0 : 48 : 0110000, 1 : 49 : 0110001, 2 : 50 : 0110010, 3 : 51 : 0110011, 4 : 52 : 0110100, 5 : 53 : 0110101, 6 : 54 : 0110110, 7 : 55 : 0110111. The fourth row contains 8 : 56 : 0111000, 9 : 57 : 0111001, colon : 58 : 0111010, semicolon : 59 : 0111011, less than : 60 : 0111100, equals : 61 : 0111101, greater than : 62 : 0111110, question mark : 63 : 0111111. The last row contains at sign : 64 : 1000000, A : 65 : 1000001, B : 66 : 1000010, C : 67 : 1000011, D : 68 : 1000100, E : 69 : 1000101, F : 70 : 1000110, G : 71 : 1000111.
The block diagram of a stream cipher encryption shows an arrow pointing to x subscript n plus 2; x subscript n plus 2 is connected to x subscript n plus 1 with an arrow, and x subscript n plus 1 is connected to x subscript n with an arrow, which further connects to an adder. Another arrow points to p subscript n plus 2, which is connected to p subscript n plus 1 with an arrow, and p subscript n plus 1 is connected to p subscript n with an arrow, which further connects to the same adder. The output of the adder is c subscript n, from which the output is obtained.
The image of a linear feedback shift register satisfying x subscript n plus 3 equals x subscript n plus 1 plus x subscript n shows that x subscript n plus 2 is connected to x subscript n plus 1 with an arrow, which is further connected to x subscript n with an arrow. x subscript n plus 2 and x subscript n are fed to an adder, and the output of the adder is fed back to x subscript n plus 2. x subscript n and the plaintext are fed to another adder, from which the output ciphertext is obtained.
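The recurrence in this figure, x subscript n plus 3 equals x subscript n plus 1 plus x subscript n (mod 2), can be sketched as a keystream generator; the initial fill of three bits below is an assumed example value, not one from the text.

```python
# Toy LFSR keystream for the recurrence x_{n+3} = x_{n+1} + x_n (mod 2).

def lfsr_keystream(state, length):
    # state: three bits [x_n, x_{n+1}, x_{n+2}]
    out, s = [], list(state)
    for _ in range(length):
        out.append(s[0])
        new_bit = (s[1] + s[0]) % 2   # x_{n+3} = x_{n+1} + x_n mod 2
        s = s[1:] + [new_bit]         # shift the register
    return out

def stream_encrypt(bits, state):
    # XOR the bits with the keystream; the same call decrypts.
    ks = lfsr_keystream(state, len(bits))
    return [(b + k) % 2 for b, k in zip(bits, ks)]

stream = lfsr_keystream([1, 0, 1], 10)
```

With the fill 1, 0, 1 the keystream repeats with period 7, illustrating why such short registers are insecure on their own.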
The first block contains P subscript 1, which represents the first plaintext, and is connected to the block E subscript K, which represents the encryption function. The block C subscript 0, which represents the initialization vector, is connected to P subscript 1 and E subscript K through a connector. E subscript K is then connected to the block C subscript 1, which represents the first ciphertext. The block containing P subscript 2, the second plaintext, is connected to E subscript K through a connector, and C subscript 1 is connected to the connector between P subscript 2 and E subscript K. Again, E subscript K connects to the block C subscript 2, which represents the second ciphertext, from which an arrow continues to dot-dot-dot.
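The chaining just described, with C subscript 0 the initialization vector and each ciphertext block mixed into the next plaintext block before encryption, can be sketched in a few lines. The function E_K below is only a placeholder toy cipher (a single-byte XOR), not a real block cipher.

```python
# CBC mode as in the diagram: C_0 = IV, C_i = E_K(P_i XOR C_{i-1}).

def E_K(block, key):
    # Placeholder "block cipher": XOR with a key byte (its own inverse).
    return block ^ key

def cbc_encrypt(plaintext_blocks, key, iv):
    prev, out = iv, []
    for p in plaintext_blocks:
        c = E_K(p ^ prev, key)   # chain the previous ciphertext in
        out.append(c)
        prev = c
    return out

def cbc_decrypt(ciphertext_blocks, key, iv):
    prev, out = iv, []
    for c in ciphertext_blocks:
        out.append(E_K(c, key) ^ prev)  # undo E_K, then undo the chaining
        prev = c
    return out
```

Changing the IV changes every ciphertext block, which is the point of the chaining.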
The illustration shows a chain of eleven connected blocks. The first block, a register labeled as X subscript 1, connects to the block E subscript K, which connects to a register containing O subscript 1; the 8 leftmost bits of O subscript 1 feed a connector together with P subscript 1, which shows 8 bits, and C subscript 1 is obtained from the connector. C subscript 1 is also fed into the rightmost position of the register for X subscript 2, while the leftmost bits of X subscript 1, labeled as drop, are shifted out, as shown by downward arrows. X subscript 2 connects to E subscript K, which connects to a register containing O subscript 2; its 8 leftmost bits feed a connector together with P subscript 2, which shows 8 bits, and C subscript 2 is obtained. Again, C subscript 2 is fed into the rightmost position of the register for X subscript 3, with the leftmost bits dropped; X subscript 3 connects to E subscript K, which connects to a register containing O subscript 3, whose 8 leftmost bits combine with P subscript 3 to give C subscript 3, which is further connected to dot-dot-dot.
The illustration shows a chain of eleven connected blocks. The first block, a register labeled as X subscript 1, connects to the block E subscript K, which connects to a register containing O subscript 1; the 8 leftmost bits of O subscript 1 feed a connector together with P subscript 1, which shows 8 bits, and C subscript 1 is obtained from the connector. Here O subscript 1, rather than C subscript 1, is fed into the rightmost position of the register for X subscript 2, while the leftmost bits, labeled as drop, are shifted out, as shown by downward arrows. X subscript 2 connects to E subscript K, which connects to a register containing O subscript 2; its 8 leftmost bits feed a connector together with P subscript 2, which shows 8 bits, and C subscript 2 is obtained. Again, O subscript 2 is fed into the rightmost position of the register for X subscript 3, with the leftmost bits dropped; X subscript 3 connects to E subscript K, which connects to a register containing O subscript 3, whose 8 leftmost bits combine with P subscript 3 to give C subscript 3, which is further connected to dot-dot-dot.
The illustration shows nine blocks that are separated. The first block labeled as X subscript 1 connects to the block E subscript K, which connects to a register containing O subscript 1; the 8 leftmost bits of O subscript 1 feed a connector together with P subscript 1, which shows 8 bits, and C subscript 1 is obtained from the connector. The block X subscript 2 equals X subscript 1 plus 1 connects to E subscript K, which connects to a register containing O subscript 2; its 8 leftmost bits feed a connector together with P subscript 2, which shows 8 bits, and C subscript 2 is obtained. The block X subscript 3 equals X subscript 2 plus 1 connects to E subscript K, which connects to a register containing O subscript 3; its 8 leftmost bits combine with P subscript 3 to give C subscript 3, which is further connected to dot-dot-dot.
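The counter arrangement in this figure, where X increments by 1 for each block and only the leftmost 8 bits of each E subscript K output are used as keystream, can be sketched as follows. The 64-bit E_K below is an invented placeholder permutation, not a real cipher.

```python
# Counter (CTR) mode: X_{i+1} = X_i + 1, O_i = E_K(X_i),
# C_i = P_i XOR (leftmost 8 bits of O_i).

MASK64 = (1 << 64) - 1

def E_K(x, key):
    # Placeholder mixing of 64-bit values; NOT a secure cipher.
    return (x * 6364136223846793005 + key) & MASK64

def ctr_encrypt(message, key, x1):
    out, x = [], x1
    for p in message:                    # one byte per counter value
        o = E_K(x, key)
        keystream_byte = o >> 56         # leftmost 8 bits of O_i
        out.append(p ^ keystream_byte)
        x = (x + 1) & MASK64             # increment the counter
    return bytes(out)
```

Since encryption is a XOR with the keystream, calling `ctr_encrypt` on the ciphertext with the same key and starting counter recovers the plaintext.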
The illustration shows two inputs, K subscript i and R subscript i minus 1, fed to a function labeled as f. The output of f is fed to an adder denoted as XOR, to which another input, L subscript i minus 1, is also fed. An output R subscript i is obtained from the adder. An arrow from R subscript i minus 1 is labeled as L subscript i.
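The round shown here computes L subscript i equals R subscript i minus 1 and R subscript i equals L subscript i minus 1 XOR f(K subscript i, R subscript i minus 1). A sketch with an arbitrary toy round function f (an assumption, not the book's f) shows why the Feistel structure is invertible no matter what f is.

```python
# One Feistel round and its inverse; f need not be invertible.

def f(key, right):
    # Arbitrary placeholder round function on bytes.
    return (right * 31 + key) & 0xFF

def feistel_round(left, right, key):
    # L_i = R_{i-1};  R_i = L_{i-1} XOR f(K_i, R_{i-1})
    return right, left ^ f(key, right)

def feistel_encrypt(left, right, keys):
    for k in keys:
        left, right = feistel_round(left, right, k)
    return left, right

def feistel_decrypt(left, right, keys):
    # Undo the rounds in reverse key order:
    # L_{i-1} = R_i XOR f(K_i, L_i);  R_{i-1} = L_i
    for k in reversed(keys):
        left, right = right ^ f(k, left), left
    return left, right
```

Decryption only ever evaluates f in the forward direction, which is why DES can use non-invertible S-boxes inside f.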
The illustration shows six numbers, 1, 2, 3, 4, 5, 6, on the upper row and eight numbers, 1, 2, 4, 3, 4, 3, 5, 6, on the lower row. Arrows run from each number in the upper row to its occurrences in the lower row: 1 to 1, 2 to 2, 3 to each 3, 4 to each 4, 5 to 5, and 6 to 6.
The flow chart shows an input R subscript i minus 1 which is fed to E left parenthesis R subscript i minus 1 right parenthesis. The output of E left parenthesis R subscript i minus 1 right parenthesis is fed to an adder, to which another input, labeled as K subscript i, is also fed. The adder output is subdivided into two parts of 4 bits each, which are connected to S subscript 1 and S subscript 2 respectively. The outputs of S subscript 1 and S subscript 2 are connected to f left parenthesis R subscript i minus 1 comma K subscript i right parenthesis.
Table shows two columns, first four bits along with their frequencies, the values are labeled as 0000: 12, 1000: 33, 0001: 7, 1001: 40, 0010: 8, 1010: 35, 0011: 15, 1011: 35, 0100: 4, 1100: 59, 0101: 3, 1101: 32, 0110: 4, 1110: 28, 0111: 6 and 1111: 39.
Another table shows two columns, last four bits along with their frequencies, the values are labeled as 0000: 14, 1000: 8, 0001: 6, 1001: 16, 0010: 42, 1010: 8, 0011: 10, 1011: 18, 0100: 27, 1100: 8, 0101: 10, 1101: 23, 0110: 8, 1110: 6, 0111: 11, and 1111: 17.
An input plaintext is fed to IP. The output of IP is subdivided into two parts, L subscript 0 and R subscript 0. The output of L subscript 0 is connected to an adder, whereas the output of R subscript 0 is connected to the adder through the function f, to which an input K subscript 1 is fed. The output of the adder is connected to R subscript 1, and R subscript 0 is also connected directly to L subscript 1. The output of L subscript 1 is connected to an adder, whereas the output of R subscript 1 is connected to the adder through the function f, to which an input K subscript 2 is fed. The output of the adder is connected to R subscript 2, and R subscript 1 is connected directly to L subscript 2. L subscript 2 and R subscript 2 are further connected, after the intermediate rounds, to L subscript 15 and R subscript 15 respectively. The output of L subscript 15 is connected to an adder, whereas the output of R subscript 15 is connected to the adder through the function f, to which an input K subscript 16 is fed. The output of the adder is connected to R subscript 16, and R subscript 15 is connected directly to L subscript 16. The outputs obtained from R subscript 16 and L subscript 16 are fed to IP superscript negative 1, which produces the ciphertext.
The values are labeled as follows:
Row 1: 58 50 42 34 26 18 10 2 60 52 44 36 28 20 12 4
Row 2: 62 54 46 38 30 22 14 6 64 56 48 40 32 24 16 8
Row 3: 57 49 41 33 25 17 9 1 59 51 43 35 27 19 11 3
Row 4: 61 53 45 37 29 21 13 5 63 55 47 39 31 23 15 7
The values are labeled as follows:
Row 1: 32 1 2 3 4 5 4 5 6 7 8 9
Row 2: 8 9 10 11 12 13 12 13 14 15 16 17
Row 3: 16 17 18 19 20 21 20 21 22 23 24 25
Row 4: 24 25 26 27 28 29 28 29 30 31 32 1
The values are labeled as follows:
Row 1: 16 7 20 21 29 12 28 17 1 15 23 26 5 18 31 10
Row 2: 2 8 24 14 32 27 3 9 19 13 30 6 22 11 4 25
A flow chart starts with an input R subscript i minus 1 which is fed to Expander. The output of expander is fed to E left parenthesis R i minus 1 right parenthesis. The output of E left parenthesis R i minus 1 right parenthesis and another input K subscript i are fed to an adder. The output of the adder is subdivided into eight 6 bits data which are labeled as B subscript 1, B subscript 2, B subscript 3, B subscript 4, B subscript 5, B subscript 6, B subscript 7 and B subscript 8 respectively. The outputs of B subscript 1, B subscript 2, B subscript 3, B subscript 4, B subscript 5, B subscript 6, B subscript 7 and B subscript 8 respectively are fed to S subscript 1, S subscript 2, S subscript 3, S subscript 4, S subscript 5, S subscript 6, S subscript 7 and S subscript 8 respectively. Again the outputs of S subscript 1, S subscript 2, S subscript 3, S subscript 4, S subscript 5, S subscript 6, S subscript 7 and S subscript 8 respectively are fed to eight 4 bits data which are labeled as C subscript 1, C subscript 2, C subscript 3, C subscript 4, C subscript 5, C subscript 6, C subscript 7 and C subscript 8 respectively. All the outputs are fed to permutation. The output of permutation is fed to f left parenthesis R subscript i minus 1 comma K i right parenthesis.
The values are labeled as follows:
Row 1: 57 49 41 33 25 17 9 1 58 50 42 34 26 18
Row 2: 10 2 59 51 43 35 27 19 11 3 60 52 44 36
Row 3: 63 55 47 39 31 23 15 7 62 54 46 38 30 22
Row 4: 14 6 61 53 45 37 29 21 13 5 28 20 12 4
The values are labeled as follows:
Round: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Shift: 1 1 2 2 2 2 2 2 1 2 2 2 2 2 2 1
A table shows information on the 48 bits chosen from the 56-bit string C subscript i D subscript i; the output is K subscript i. The table shows four rows and twelve columns.
Row 1: 14, 17, 11, 24, 1, 5, 3, 28, 15, 6, 21 and 10.
Row 2: 23, 19, 12, 4, 26, 8, 16, 7, 27, 20, 13 and 2.
Row 3: 41, 52, 31, 37, 47, 55, 30, 40, 51, 45, 33 and 48.
Row 4: 44, 49, 39, 56, 34, 53, 46, 42, 50, 36, 29 and 32.
A table shows information on the S-boxes. The table is divided into eight parts, depicted as S-box 1 through S-box 8, where each part shows four rows and 16 columns with several numbers.
A box labeled as Plaintext is linked with the box AddRoundKey, which is labeled as W left parenthesis 0 right parenthesis, W left parenthesis 1 right parenthesis, W left parenthesis 2 right parenthesis, W left parenthesis 3 right parenthesis with a left inward arrow. Below the box AddRoundKey is a block of four boxes labeled as Round 1. In the block, the first box, SubBytes, is linked with the box below, labeled as ShiftRows; ShiftRows is linked with MixColumns, and lastly MixColumns is linked with AddRoundKey, which is labeled as W left parenthesis 4 right parenthesis, W left parenthesis 5 right parenthesis, W left parenthesis 6 right parenthesis, W left parenthesis 7 right parenthesis with a left inward arrow. Further down, a block of four boxes is labeled as Round 9, starting with SubBytes, which is linked with ShiftRows; ShiftRows is linked with MixColumns, and lastly MixColumns is linked with AddRoundKey, which is labeled as W left parenthesis 36 right parenthesis, W left parenthesis 37 right parenthesis, W left parenthesis 38 right parenthesis, W left parenthesis 39 right parenthesis with a left inward arrow. Finally, a block of three boxes is labeled as Round 10, starting with SubBytes, which is linked with ShiftRows; ShiftRows is linked with AddRoundKey, which is labeled as W left parenthesis 40 right parenthesis, W left parenthesis 41 right parenthesis, W left parenthesis 42 right parenthesis, W left parenthesis 43 right parenthesis with a left inward arrow. Below the block, a box labeled as Ciphertext is linked.
A table shows the information of Rijndael Encryption. The data listed for:
Line 1: ARK, using the 0th round key.
Line 2: Nine rounds of SB, SR, MC, ARK, using round keys 1 to 9.
Line 3: A final round: SB, SR, ARK, using the 10th round key.
A table shows S-Box for Rijndael that shows various numerical values. The values are listed as
99 124 119 123 242 107 111 197 48 1 103 43 254 215 171 118
202 130 201 125 250 89 71 240 173 212 162 175 156 164 114 192
183 253 147 38 54 63 247 204 52 165 229 241 113 216 49 21
4 199 35 195 24 150 5 154 7 18 128 226 235 39 178 117
9 131 44 26 27 110 90 160 82 59 214 179 41 227 47 132
83 209 0 237 32 252 177 91 106 203 190 57 74 76 88 207
208 239 170 251 67 77 51 133 69 249 2 127 80 60 159 168
81 163 64 143 146 157 56 245 188 182 218 33 16 255 243 210
205 12 19 236 95 151 68 23 196 167 126 61 100 93 25 115
96 129 79 220 34 42 144 136 70 238 184 20 222 94 11 219
224 50 58 10 73 6 36 92 194 211 172 98 145 149 228 121
231 200 55 109 141 213 78 169 108 86 244 234 101 122 174 8
186 120 37 46 28 166 180 198 232 221 116 31 75 189 139 138
112 62 181 102 72 3 246 14 97 53 87 185 134 193 29 158
225 248 152 17 105 217 142 148 155 30 135 233 206 85 40 223
140 161 137 13 191 230 66 104 65 153 45 15 176 84 187 22
A table shows the information of Rijndael Decryption. The data listed for:
Line 1: ARK, using the 10th round key.
Line 2: Nine rounds of ISB, ISR, IMC, IARK, using round keys 9 to 1.
Line 3: A final round: ISB, ISR, ARK, using the 0th round key.
A table shows information of The RSA Algorithm. The data listed for:
Line 1: Bob chooses secret primes p and q and computes n equal pq.
Line 2: Bob chooses e with gcd left parenthesis e comma left parenthesis p minus 1 right parenthesis left parenthesis q minus 1 double right parenthesis equals 1.
Line 3: Bob computes d with de triple equal 1 left parenthesis mod left parenthesis p minus 1 right parenthesis left parenthesis q minus 1 double right parenthesis.
Line 4: Bob makes n and e public comma and keeps p comma q comma d secret.
Line 5: Alice encrypts m as c triple equal m superscript e left parenthesis mod n right parenthesis and sends c to Bob.
Line 6: Bob decrypts by computing m triple equal c superscript d left parenthesis mod n right parenthesis.
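The six steps above can be traced with small numbers. The primes, exponent, and message below are standard illustrative choices only; real RSA moduli are hundreds of digits long.

```python
# The RSA steps with tiny illustrative parameters.
p, q = 61, 53                 # Bob's secret primes
n = p * q                     # n = 3233, made public
phi = (p - 1) * (q - 1)       # (p-1)(q-1) = 3120
e = 17                        # public exponent with gcd(e, phi) = 1
d = pow(e, -1, phi)           # secret d with d*e ≡ 1 (mod phi); Python 3.8+

m = 65                        # Alice's message, 0 <= m < n
c = pow(m, e, n)              # Alice encrypts: c = m^e mod n
recovered = pow(c, d, n)      # Bob decrypts:   m = c^d mod n
```

Here d works out to 2753, since 17 times 2753 equals 46801, which is 1 more than 15 times 3120.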
A table shows information of RSA Encryption Exponents. The data listed for e and percentage as follows:
Column 1: 65537, 17, 41, 3, 19, 25, 5, 7, 11, 257 and others.
Column 2: 95.4933 percent, 3.1035 percent, 0.4574 percent, 0.3578 percent, 0.1506 percent, 0.1339 percent, 0.1111 percent, 0.0596 percent, 0.0313 percent, 0.0241 percent and 0.0774 percent.
A table shows information of Factorization Records. The data listed for year and number of digits as follows:
Column 1: 1964, 1974, 1984, 1994, 1999, 2003, 2005 and 2009.
Column 2: 20, 45, 71, 129, 155, 174, 200 and 232.
A long message, an arbitrary bit string like …… 01101001…, passes through the hash function, which produces as output a message digest of fixed length, 256 bits: 11…10.
Initially, a value IV is fed as input into the first block f, together with the first message block M sub 0. The blocks are then fed one by one into f; the message M runs from M sub 0 up to M sub left parenthesis n minus 1 right parenthesis. The final output is the hash value H left parenthesis M right parenthesis.
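The iteration in the figure, starting from the IV and feeding each message block into f in turn, can be sketched as follows. The compression function f here is an invented numeric placeholder, not a cryptographic one.

```python
# Iterated (Merkle-Damgard style) hashing:
#   H_0 = IV,  H_{i+1} = f(H_i, M_i),  hash = H_n.

def f(h, block):
    # Placeholder compression function on 32-bit integers; NOT secure.
    return (h * 131 + block + 7) % (2 ** 32)

def iterated_hash(blocks, iv=0x12345678):
    h = iv
    for m in blocks:
        h = f(h, m)          # feed the message blocks one by one into f
    return h
```

The chaining means the order of the blocks matters: hashing the blocks 1, 2 gives a different value from hashing 2, 1.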
r is taken as the rate and c as the capacity. First a block of 0s is introduced: the first 0 is the rate r and the second 0 is the capacity c, with the rate r occupying most of the block. The message block M subscript 0 is combined with the rate-and-capacity block and transferred to a block f, whose output feeds the next rate-and-capacity block, and so on up to the message block M subscript n minus 1. This is the absorbing part. The diagram is then separated by a dotted line, after which the squeezing part starts: here the outputs Z subscript 0 and so on are squeezed out of the rate-and-capacity and f blocks.
An illustration shows three blocks labeled as Data Record k minus 1, Data Record k, and Data Record k plus 1, where each block contains a data block, h left parenthesis right parenthesis, and a pointer. A vertical box with h left parenthesis right parenthesis and a pointer is shown at the upper right. The vertical box is linked to the third block with an arrow; the third block is linked to the second block, and the second block is linked to the first block, each with an arrow.
An illustration shows a tree diagram. The first block of the tree is h left parenthesis right parenthesis, the root. It is linked by a downward arrow to a second block containing h left parenthesis right parenthesis h left parenthesis right parenthesis, which divides into two child blocks, each containing h left parenthesis right parenthesis h left parenthesis right parenthesis. Each of these is in turn subdivided into two child blocks containing h left parenthesis right parenthesis h left parenthesis right parenthesis, giving four blocks at the lowest internal level. Those four blocks are subdivided into pairs of leaves labeled R0 and R1, R2 and R3, R4 and R5, and R6 and R7 respectively.
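The tree in the figure, with leaf records R0 through R7 and each internal node holding h left parenthesis right parenthesis of its two children, can be computed as below; SHA-256 stands in for the unspecified hash function h.

```python
# Compute the root of the hash tree over eight records.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    level = [h(r) for r in records]          # hash the leaf records
    while len(level) > 1:
        # Pair adjacent nodes and hash each concatenated pair.
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

records = [f"R{i}".encode() for i in range(8)]
root = merkle_root(records)
```

Changing any one record changes the root, so the single root value authenticates all eight records at once.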
The illustration shows a basic Kerberos model with the participants Cliff, Serge, Trent, and Grant, where Cliff is at the center, linked to Trent, Grant, and Serge. Cliff sends a message to Trent, indicated by an arrow numbered 1, and Trent replies to Cliff, indicated by an arrow numbered 2. Cliff sends a message to Grant, indicated by an arrow numbered 3, and Grant replies to Cliff, indicated by an arrow numbered 4. Cliff sends a final message to Serge, indicated by an arrow numbered 5.
The illustration shows a block diagram of a certification hierarchy. The block "CA", abbreviated for Certification Authority, branches into two Clients and an "RA", abbreviated for Registration Authority. The block "RA" is again subdivided into three Clients.
The screenshot shows a CA’s Certificate; General.
Line 1: This certificate has been verified for the following uses, colon
Line 2: Email Signer Certificate
Line 3: Email Recipient Certificate
Line 4: Status Responder Certificate
Line 5: Issued to, colon
Line 6: Organization left parenthesis O right parenthesis, colon, VeriSign, Inc
Line 7: Organizational Unit left parenthesis OU right parenthesis, colon, Class 1 Public Primary Certification Authority dash G2
Line 8: Serial Number
Line 9: Issued by, colon
Line 10: Organization left parenthesis O right parenthesis, colon, VeriSign, Inc
Line 11: Organizational Unit left parenthesis OU right parenthesis, colon, Class 1 Public Primary Certification Authority dash G2
Line 12: Validity, colon
Line 13: Issued on, colon, 05 forward slash 17 forward slash 98
Line 14: Expires on, colon, 05 forward slash 18 forward slash 98
Line 15: Fingerprints, colon
Line 16: SHA1 Fingerprint, colon
Line 17: MD5 Fingerprint, colon
The screenshot shows a CA’s Certificate; Details.
Line 1: Certificate Hierarchy
Line 2: Verisign Class 1 Public Primary Certification Authority dash G2
Line 3: Certificate Fields
Line 4: Verisign Class 1 Public Primary Certification Authority dash G2
Line 5: Certificate
Line 6: Version, colon, Version 1
Line 7: Serial Number, colon
Line 8: Certificate Signature Algorithms, colon, PKCS hashtag 1 SHA dash 1 With RSA Encryption
Line 9: Issuer, colon, OU equals VeriSign Trust Network
Line 14: Validity
Line 15: Not before, colon, 05 forward slash 17 forward slash 98
Line 16: Not after, colon, 05 forward slash 18 forward slash 98
Line 17: Subject, colon, OU equals VeriSign Trust Network
Line 22: Subject Public Key Info, colon, PKCS hashtag 1 RSA Encryption
Line 23: Subject’s Public Key, colon
Line 24: A table of 9 rows and 16 columns is shown
Line 25: Certificate Signature Algorithm, colon, PKCS hashtag 1 SHA dash 1 With RSA Encryption
Line 26: Certificate Signature Name, colon
Line 27: A table of 8 rows and 16 columns is shown.
The screenshot shows a Clients Certificate.
Line 1: Certificate Hierarchy
Line 2: Verisign Class 3 Public Primary CA
Line 3: Certificate Fields
Line 4: Verisign Class 3 Public Primary Certification Authority
Line 5: Certificate
Line 6: Version, colon, Version 3
Line 7: Serial Number, colon
Line 8: Certificate Signature Algorithms, colon, md5RSA
Line 9: Issuer, colon, OU equals www.verisign.com forward slash CPS Incorp.
Line 14: Validity
Line 15: Not before, colon, Sunday, September 21, 2003
Line 16: Not after, colon, Wednesday, September 21, 2005
Line 17: Subject, colon, CN equals online.wellsfargo.com
Line 22: Subject Public Key Info, colon, PKCS hashtag 1 RSA Encryption
Line 23: Subject’s Public Key, colon, 30 81 89 02 81 81 00 a9
Line 24: Basic Constraints, colon, Subject Type equals End Entity, Path Length Constraint equals None
Line 25: Subject's Key Usage, colon, Digital Signature, Key Encipherment left parenthesis AO right parenthesis
Line 26: CRL Distribution Points, colon
Line 27: Certificate Signature Algorithm, colon, MD5 With RSA Encryption
Line 28: Certificate Signature Value, colon
A table represents Block ID colon 76 Create Coins. The table contains 4 rows and 3 columns. The Block ID colon 76 Create Coins is divided into three columns labeled as Trans ID with values 0, 1 and 2, Value with values 5, 2 and 10 and Recipient with values PK subscript BB, PK subscript Alice and PK subscript Bob.
A table represents Block ID colon 77 Consume Coins colon 41 left parenthesis 2 right parenthesis, 16 left parenthesis 0 right parenthesis, 31 left parenthesis 1 right parenthesis, Create Coins. The table contains 6 rows and 3 columns, divided into three columns labeled as Trans ID with values 0, 1, 2 and 3, Value with values 15, 5, 4 and 11, and Recipient with values PK subscript Sarah Store, PK subscript Alice, PK subscript Bob and PK subscript Charles.
The block diagram of Bitcoin's blockchain and a Merkle tree of transactions shows three blocks in adjacent positions. Each block has four lines of text. The third block connects to the second, and the second block connects to the first.
Line 1: prev underscore hash, colon, h left parenthesis right parenthesis
Line 2: timestamp
Line 3: merkleroot, colon, h left parenthesis right parenthesis
Line 4: nonce
Another block is placed just below the second block; it is composed of one sub-block with the line: h left parenthesis right parenthesis, indented, h left parenthesis right parenthesis. Below the first sub-block there are two other sub-blocks, each with the lines h left parenthesis right parenthesis, indented, h left parenthesis right parenthesis, connected to the first sub-block with two arrows. These two sub-blocks are connected to four sub-blocks represented as TX.
The box has a uniform thickness highlighted in black. Another solid rectangle is present inside the tunnel, and below it a door is shown.
The illustration shows the tunnel used in the Zero-Knowledge Protocol. The tunnel is represented by a rectangular box with an opening at the top; inside it, another tunnel is formed by a rectangular box with an opening at the top. Both boxes have a uniform thickness highlighted in black. Another solid rectangle is present inside the tunnel, and a Central Chamber is shown below it.
A schematic diagram of Huffman encoding shows outputs a, b, c, and d arranged vertically. Outputs c and d, each corresponding to 0.1, combine to give 0.2, with the two branches labeled 0 and 1. Then 0.2 and b, corresponding to 0.3, give 0.5, with the two branches labeled 0 and 1. Further, 0.5 and a, corresponding to 0.5, give 1, with the two branches labeled 0 and 1.
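The successive merges in the diagram can be reproduced with a priority queue: repeatedly combine the two least probable nodes, prefixing 0 onto one subtree's codes and 1 onto the other's. The particular 0/1 labels are a convention, so only the code lengths are forced by the probabilities.

```python
# Huffman coding for the probabilities a: 0.5, b: 0.3, c: 0.1, d: 0.1.
import heapq

def huffman_codes(freqs):
    # Heap entries: (probability, tiebreak, {symbol: code-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        # Prefix 0 onto one subtree's codes and 1 onto the other's.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes({"a": 0.5, "b": 0.3, "c": 0.1, "d": 0.1})
```

The resulting lengths are 1, 2, 3, 3 bits for a, b, c, d, giving an average of 1.7 bits per symbol.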
The illustration shows Shannon's experiment on the entropy of English. A sentence is broken across six lines: "there is no reverse", "on a motorcycle a", "friend of mine found", "this out rather", "dramatically the", and "other day". Below each letter, the information obtained is displayed.
The x-axis ranges from negative 1 to 3, in increments of 1 and the y-axis ranges from negative 4 to 4, increments of 2. The first curve begins at the first quadrant, passes decreasing through point 1 of the x-axis, then ends decreasing at the fourth quadrant making a peak point at point 1 of the x-axis.
The x-axis ranges from negative 4 to 8, in increments of 2 and the y-axis ranges from negative 20 to 20, in increments of 10. The curve starts from the first quadrant passing through the point 10 of the y-axis, then passes through the second quadrant decreasing through the point negative 4 of the x-axis intersecting the point negative 10 and ends decreasing at the fourth quadrant.
The graph shows a curve, a straight line and another doted straight line that is parallel to the y-axis and it is plotted in the first and second quadrant. The curve begins at the first quadrant passes decreasing through the y-axis, enters the second quadrant passes through the x-axis, enters the third quadrant passes through the y-axis and ends decreasing at the fourth quadrant.
The illustration shows a straight line segment and two vectors. The first vector, labeled as v subscript 1, lies along the line segment pointing toward the right. The second vector, labeled as v subscript 2, starts from the initial point of the first vector, pointing upward and slightly deflected to the right of 90 degrees. The second vector lies between two dotted lines perpendicular to the line segment.
The illustration shows a straight line segment and two vectors. The first vector, labeled as v subscript 1, lies along the line segment pointing toward the right. The second vector, labeled as v subscript 2, starts from the initial point of the first vector, pointing upward and deflected at an angle of 60 degrees toward the right. The second vector lies between two dotted lines perpendicular to the line segment and crosses the perpendicular line at the very right.
The illustration shows an input message which is fed to the encoder, encoder output is fed into the noisy channel through codewords. Noisy channel output is fed to decoder and decoder output is fed to message.
The x-axis ranges from 0.1 to 0.5, in increments of 0.1 and the y-axis labeled as code rate ranges from 0.2 to 1, in increments of 0.2. The graph shows a non-linear decreasing curve starting from point 1 of the y-axis and ends at point 0.5 on the x-axis.
The illustration shows a white rectangular bar labeled as light source with a vertical edge at the right end. A black rectangular bar with a vertical edge at its right end, labeled as Polaroid A, is inclined to a grey rectangular bar with a vertical black edge. An arrow labeled as light, pointing toward the right, is shown below the illustration.
The illustration shows a white rectangular bar labeled as light source with a vertical edge at the right end. A black rectangular bar with a vertical edge at its right end, labeled as Polaroid A, is inclined to a grey rectangular bar with a vertical edge labeled as Polaroid C and a black vertical edge. An arrow labeled as light, pointing toward the right, is shown below the illustration.
The illustration shows a white rectangular bar labeled as light source with a vertical edge at the right end. A black rectangular bar with a vertical edge at its right end, labeled as Polaroid A, is inclined to a grey rectangular bar with a vertical edge labeled as Polaroid B. Polaroid B is inclined to a grey rectangular bar with a vertical edge labeled as Polaroid C, which is inclined to a grey rectangular bar with a vertical black edge. An arrow labeled as light, pointing toward the right, is shown below the illustration.
The image shows a graph of y versus x. The x axis ranges from 0 to 7, in increments of 1 and the y-axis ranges from 0 to 1, in increments of 0.2. Following coordinates (0, 0.15), (1, 0.15), (2, 0.35), (3, 0.85), (4, 0.35), (5, 0.85), (6, 0.35) and (7, 0.15) are marked.
The image shows a graph of y versus x. The x axis ranges from 0 to 14, in increments of 2 and the y-axis ranges from 0 to 1, in increments of 0.2. Following coordinates (0, 1), (1, 0.2), (2, 0.3), (3, 0.9), (4, 0), (5, 0.2), (6, 0.65), (7, 0.3), (8, 0), (9, 0.3), (10, 0.65), (11, 0.2), (12, 0), (13, 0.9), (14, 0.25) and (15, 0.2) are marked.
The image shows a graph of y versus x. The x axis ranges from 0 to 500, in increments of 100 and the y-axis ranges from 0 to 0.2, in increments of 0.05. The graph starts from y equals 0.01, is comprised of four peaks with an altitude of y equals 0.02. The graph is comprised of numerous dots, whose density decreases towards the peaks.
A graph ranges from negative 4 to 4 in increments of 2 on the vertical axis and from negative 1 to 3 in increments of 1 on the horizontal axis. The graph plots an oval-shaped curve around the point 0 on the vertical axis. Another branch of the elliptic curve is plotted, with vertex at (1, 0), extending to (3, 5).
A y versus x graph ranges from negative 4 to 4 in increments of 2 on the y axis and from negative 1 to 3 in increments of 1 on the x axis. The graph plots an oval-shaped curve whose vertices lie at the origin and at (negative 1, 0). Another branch of the elliptic curve is plotted, symmetric about the x axis, with vertex at (1, 0).
A graph ranges from negative 5 to 5 in increments of 1 on the vertical axis and from negative 1 to 3 in increments of 0.5 on the horizontal axis. The graph plots an oval-shaped curve around the point 0 on the vertical axis. Another branch of the elliptic curve is plotted, with vertex at (1, 0), extending to (3, 5). An equation is shown above the graph depicting y superscript 2 minus x times left parenthesis x minus 1 right parenthesis times left parenthesis x plus 1 right parenthesis equals 0.